Thursday, December 17, 2009

Natural selection expresses the idea that organisms (i.e. their genes) vary and that variability has consequences. Some variants are unfit and go extinct, others adapt and do well. This process, repeated over millions of years, has given us the variety of life on earth.

Many authors have played with the idea of applying these insights from evolutionary biology to changes in culture, the notion of ‘memes’ being one of them. Richard Dawkins proposed that human culture is composed of a multitude of particulate units, memes, which are analogous to the genes of biological evolution. These cultural replicators are transmitted by imitation between members of a community and are subject to mutational-evolutionary pressures over time.

Recently, researchers at Imperial College London started yet another attempt to show if, and how, natural selection might play a role in music. They are currently running an online experiment hoping to find support for this idea:

Friday, November 27, 2009

Last month an interesting review was published in the journal Trends in Cognitive Sciences arguing that ‘predictive representations of temporal regularities constitute the core of auditory objects in the brain.’ A possible consequence of this argument is that auditory sensory memory and (temporal) predictions are simply two sides of the same coin.

The authors (among them István Winkler and Sue Denham, who collaborated with our Amsterdam group in the EmCAP project; see earlier blogs) review much of the recent literature using brain imaging and electrophysiological techniques. They support their hypothesis with at least five observations (I paraphrase the authors here):

First, auditory regularity representations are temporally persistent; they have been shown to connect sounds separated by up to circa 10 seconds and persist for at least 30 seconds.

Second, auditory regularity representations encode all sound features with a resolution comparable to perception, since perceptually discriminable deviations elicit a Mismatch Negativity (MMN).

Third, when two sound streams are perceptually separated, MMN reflects the perceived sound organization: its elicitation dynamically follows perceptual fluctuations between the two alternative sound organizations, as well as the effects of priming sequences on perception.

Fourth, regularities are extracted from acoustically widely different exemplars in a sequence, including the natural variation of environmental sounds.

And finally, violations of predictive rules have been shown to elicit the MMN. For example, delivering a low tone after a short one elicited the MMN, when for most tones the rule “short tones are followed by high-pitched tones, long tones by low-pitched tones” held.

Interestingly, violations in the form of silence (i.e. no sound) - such as omissions in a natural drum pattern - also elicit an MMN. In addition, these effects are found even when attention is directed to aspects other than the sound/music, or when participants are inattentive (as is the case with sleeping neonates).

Thursday, October 22, 2009

A simple 'yes' is the short answer, I believe. I attempt to explain this in a book that is about to be published in Dutch (English and other languages are planned for 2010/11). The evidence comes from researchers from all over the globe. Standing on the shoulders of giants... it turned out to be a great view...

Saturday, September 19, 2009

This week a brief update consisting of a short interview with Ani Patel (Senior Fellow at the Neurosciences Institute in San Diego, US) at a conference workshop at Indiana University-Purdue University Indianapolis (IUPUI), talking about Snowball: the dancing cockatoo that so gracefully helped boost the visibility of research in the neuroscience and cognition of music. The other video shows Snowball (and his owner Irena Schulz) at the World Science Festival. Is Snowball listening or imitating?

Tuesday, September 15, 2009

Last week an interesting study was published (online) that provides evidence that music exposure facilitates neuroplasticity in rats. While I feel quite uncomfortable with using animals for these studies (especially if you read the explicit method sections of these kinds of neurobiological papers :-\), the results could well contribute to a better insight into how music might be functional in the neurorehabilitation of humans.

About sixty rats were divided into four groups, two of which had surgery performed on them just after birth: a small section of the brain was removed, an area considered important for, e.g., spatial memory. The research elaborates on earlier studies showing that music has an effect on hippocampal neurogenesis, as well as facilitating spatial memory (e.g., Kim et al., 2006).

The authors conclude that an enriched sound environment -exposing rats to piano music- helps the recovery from neural damage. Rats with a damaged brain showed signs of recovery after about fifty days of listening to Mozart piano sonatas for about 12 hours a day. Compared to rats that also had brain damage but did not listen to music, they performed significantly better in a spatial memory task (finding their way in a maze) and in their emotional reactivity (assessed using a marble-burying task).

While it remains unclear whether sounds other than music would have the same effect, the study is a striking example of research showing that music has a larger role in shaping the brain than previously thought.

Friday, September 11, 2009

Studying earworms (or ‘brainworms’, as Oliver Sacks calls them) would make an ideal PhD topic: it is a striking yet unexplained phenomenon, and a research question that has been around for quite a while, yet (embarrassingly for music cognition) still lacks a sufficient answer. One of the reasons might be - comparable to studying déjà vu - that thinking of an experiment that can capture the phenomenon when it occurs is quite a challenge. And, as far as I am aware, no explanation has yet appeared in the scientific journals.

Nevertheless, there is something to say about the structural aspects of the melodies that tend to function as earworms. Most sticky songs are relatively simple in terms of their harmonic structure, but have a striking moment - the hook of the song. It is the point in the music where something catchy happens, and precisely the moment where you would start singing the song from memory (see more at [1]). That said: this is just an after-the-fact interpretation, not an explanation.

Wednesday, July 29, 2009

For scientists it is nothing special: traveling all summer, visiting several workshops and conferences. You get to present a year's work in a presentation of just a few minutes (after hours of traveling), and hear a huge number of talks by others (who also have to squeeze a year's work into a fifteen-minute talk).

Nevertheless, these meetings can be refreshing: novel insights, strange data, elegant formalizations, or just fun interpretations, all condensed into those strange ten minutes of attention...

This week the Cogsci -Cognitive Science- Conference is in Amsterdam (the first time I will go to a conference on my bike!).

"In recent years, the study of music perception and cognition has witnessed an enormous growth of interest. Music cognition is an intrinsically interdisciplinary subject which combines insights and research methods from many of the cognitive sciences. This trend is clearly reflected, for example, in the contributions in special issues on music, published by journals such as Nature, Cognition, Nature Neuroscience, and Connection Science. This symposium focuses on music learning and processing and will feature perspectives from cognitive neuroscience, experimental psychology, computational modeling, linguistics, and musicology. The objective is to bring together researchers from different research fields and traditions in order to discuss the progress made, and future directions to take, in the interdisciplinary study of music cognition. The symposium also aims to illustrate how closely the area of music cognition is linked to topics and debates in the cognitive sciences."

Sunday, July 26, 2009

This summer you will find fewer postings than usual on this blog. This is because I'm busy finalizing a book and a number of research proposals, hopefully generating some new jobs in music cognition :-) So no worries, this is just a summer dip ...

Saturday, May 30, 2009

Last week a paper was published in PLoS-ONE suggesting a relation between AVPR1A haplotypes and musical creativity. A group of Finnish researchers analyzed 19 families with a total of 343 family members on their musical aptitude —using the Seashore test and a test developed by one of the authors— and their DNA profiles. They were able to show an association between the AVPR1A gene (and related genes) and levels of musical creativity. The research contrasts with earlier twin research that suggested no such relation (e.g., Coon & Carey, 1989). The authors propose the interesting hypothesis that music perception and creativity in music are linked to the same phenotypic spectrum of human cognitive social skills, like human bonding and altruism, both associated with AVPR1A. Music as a form of ‘extreme’ bonding behavior...

It was just a matter of time for such a study to emerge. Still, the results of this study are merely correlational. I like to think of the capacity for music as shared rather than special, and as a result of complex nature-nurture interactions.

Monday, May 04, 2009

This year several new insights were published on the phenomenon of beat induction.* Beat induction is the cognitive skill that allows us to hear a regular pulse in music to which we can synchronize. It allows us to dance and make music together. Hence it is considered a skill that must have contributed to the origins of music. Without it, making music would be quite difficult.

Most of these recent studies try to support (or falsify) the criteria that beat induction, as a cognitive skill that allows for music, should fulfill — it should at least be a) special to music (domain-specific), b) develop spontaneously (or be innate), and c) be uniquely human (human-specific).

In earlier blogs I discussed some recent evidence that beat induction is active in newborns, providing support for the innate criterion. This week two new studies appeared in Current Biology challenging the human-specific criterion (see also BBC News or Dutch radio).

The evidence is compelling (and will cost me two bottles of wine). Both the studies of Schachner et al. and Patel et al. show that it is unlikely to be due to chance that one cockatoo and twenty-five parrots synchronized to music.

Patel and Iversen’s tempo-controlled experiment is especially interesting because it made it possible to study whether the cockatoo is actually listening to the music. Although the current paper reports only on bouts where the cockatoo synchronized (selected by the researchers!), some tests show this is not simply due to chance. However, synchrony in about ten percent of all recordings is not a lot for a bird that seems to enjoy dancing and almost constantly moves to the music.
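To give an idea of what such a chance test involves, here is a minimal sketch of one common approach: a permutation test that circularly shifts the bird's movement times and asks how often random shifts synchronize with the beat as well as the observed data. The data and function names are hypothetical illustrations, not the analysis the authors actually used.

```python
import random

def mean_asynchrony(movements, beats):
    """Mean absolute distance (in seconds) from each movement to its nearest beat."""
    return sum(min(abs(m - b) for b in beats) for m in movements) / len(movements)

def synchrony_p_value(movements, beats, n_perm=1000, seed=1):
    """Permutation test: circularly shift the movement times at random and count
    how often the shifted data are at least as well synchronized as the observed
    data. A small p-value suggests the synchrony is unlikely to be chance."""
    rng = random.Random(seed)
    observed = mean_asynchrony(movements, beats)
    duration = max(beats)
    count = 0
    for _ in range(n_perm):
        shift = rng.uniform(0.0, duration)
        shifted = [(m + shift) % duration for m in movements]
        if mean_asynchrony(shifted, beats) <= observed:
            count += 1
    return count / n_perm
```

For movements tightly locked to a 0.5-second beat, the test returns a small p-value; for movements at random times, it hovers around chance level.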

Furthermore, it is surprising that Schachner et al. state that none of their bird subjects was 'explicitly trained to produce movement in response to acoustic material.' This is at least not true for the cockatoo Snowball, who was analyzed in both studies. As Patel et al. write, Snowball (likely) learned his foot-lifting behavior from a previous owner who made arm movements in synchrony while dancing (to music).

Snowball needs to be in the mood for dancing and has to be enthusiastically spoken to in order to start him up. This suggests an important role for the owner/trainer being present at the experiment (by the way, it is unclear whether the researchers were actually present at these recording sessions). In addition, during at least half of the experiments the current owner was nodding her head (apparently not systematically influencing the results). It seems Snowball deserves a more formal, yet attractive, setting in the near future.

Overall, it makes me interpret these data as learned behavior and a mimicking phenomenon, more than an innate or spontaneously developing form of beat induction that humans have.

Nevertheless, it is interesting to consider what makes parrots and cockatoos receptive to beat induction, rather than our closer relatives like chimpanzees or bonobos. Patel suggests the vocal learning hypothesis: the capacity for entrainment is a by-product of selection for vocal mimicking, with both requiring modality-specific links between auditory and motor representations. Others believe it is the particular rhythmic chorusing (as a behavior of complex social groups) that is the source of the behavior. I’m currently simply ‘confused’, which is the best a new empirical finding can do!

Sunday, April 19, 2009

For a long time I thought of it as quite a peculiar phenomenon: grown-ups who, the moment they spot a baby, start talking in a curious dialect. A dialect that has unclear semantics, little or no grammar, and is full of exaggerated rhythmic and melodic diversions.

Nevertheless, babies seem to love it. They react —cooing with pleasure— to melodies that are not unlike pop songs such as ‘De do do do, de da da da’ by The Police or ‘La la la’ by Kylie Minogue.

This babbling or, more formally, infant-directed speech (IDS) differs from normal adult speech in its high pitch, exaggerated melodic contours, slower tempo, and greater rhythmic variation. A kind of ‘musilanguage’ indeed. It is a widespread phenomenon that is —as far as we know— present in all cultures, with more similarities than differences across them, even when some characteristics of IDS conflict with the rules of the adult language (e.g. Chinese). So it seems quite unlikely that IDS is ‘just’ a preparation for language, until recently the most common interpretation.

Laurel Trainor and her team at McMaster University (Ontario, Canada) suggest that IDS is essentially a tool to communicate emotion. The decoding of the speech patterns into their emotional meaning is something infants can do easily, long before they learn about language. In that sense, it seems likely that language makes use of faculties special to music, instead of music being a side effect of language (as was once suggested by a well-known cognitive psychologist).

Tuesday, April 14, 2009

Glenn Schellenberg of the University of Toronto just started an online internet experiment on Absolute Pitch (AP). If you have, or suspect you have, absolute pitch do the online test here. It takes about 15 minutes and you get your score in the end.

Friday, April 10, 2009

Below is a video impression of an evening that was organized this week by the Studium Generale of the University of Groningen. The idea of the lecture/concert was to explore tempo and timing, swing and groove from the perspective of both the performer and the listener (an idea that turned out not always to be a success ;-) See here for a longer fragment.

Sunday, March 29, 2009

In the Netherlands (and I’m sure there are versions of it in the UK and the US as well) there is a weekly radio show containing a returning item in which music experts are asked to compare and judge two or three CD recordings of the same piece, without knowing who the musicians are. They have to guess the performers and describe why they do (or don’t) like that particular performance.

How well would you do in such a test? The common hypothesis is that experts do this much better, e.g. under the assumption that they are more sensitive listeners. But do experts indeed hear more detail and more nuances when compared to a 'common listener'? Or do they just have more terminology available to verbalize these differences?

Two years ago our group did a large-scale online listening experiment with a similar task. Participants were asked to compare several pairs of recordings of well-known musicians. One of the recordings was taken directly from a CD, but the other was originally performed at another tempo (faster or slower) and then scaled to be similar in tempo to the former recording. The task was to judge which recording was real and which one was manipulated, by focusing on the timing used by the performer.

The results were recently published in the Journal of Experimental Psychology, with a surprising outcome: the judgments seem to be largely influenced by exposure to music (listening a lot to one’s favorite music) and not (at all) by the level of expertise (amount of formal musical training). One seems to learn a lot by simply listening.

* The first recording is the original: Glenn Gould performing English Suite No. 4 by J.S. Bach. The second recording is Sviatoslav Richter performing the same piece. However, this recording was sped up from 70 to 87 bpm, making his use of tempo rubato 'unnatural'.
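The manipulation described above amounts to a uniform time-scaling of the performance. The sketch below illustrates the arithmetic under that assumption (it is not the actual audio-processing pipeline used in the study): speeding a performance up from 70 to 87 bpm multiplies every onset time, and hence every expressive timing deviation, by 70/87, which is what can make the rubato sound 'unnatural' at the new tempo.

```python
def scale_tempo(onsets, old_bpm, new_bpm):
    """Uniformly time-scale a list of note onset times (in seconds) from
    old_bpm to new_bpm. All inter-onset intervals, including the performer's
    expressive deviations, shrink or stretch by the same factor."""
    factor = old_bpm / new_bpm
    return [t * factor for t in onsets]

# A beat lasting 60/70 s at 70 bpm lasts 60/87 s after scaling to 87 bpm:
original = [0.0, 60 / 70, 2 * 60 / 70]
scaled = scale_tempo(original, 70, 87)
```

Note that global tempo scaling preserves the *relative* timing pattern exactly; the study's premise is that listeners are nevertheless sensitive to rubato being played at a tempo other than the one it was conceived at.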

Thursday, March 05, 2009

Karl Popper was a philosopher of science who was very much interested in this question. He tried to distinguish 'science' from 'pseudoscience', but became more and more dissatisfied with the idea that the empirical method (supporting a theory with observations and experiments) could effectively mark this distinction. He sometimes used the example of astrology “with its stupendous mass of empirical evidence based on observation”, but also nuanced this by stating that “science often errs, and that pseudoscience may happen to stumble on the truth.”

Next to his well-known work on falsification, Popper started to develop alternatives to determine the scientific status or quality of a theory. He wrote the complex yet intriguing sentence “confirmations [of a theory] should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.” (Popper, 1963).

Popper was especially thrilled by the result of Eddington’s eclipse observations, which in 1919 brought the first important confirmation of Einstein's theory of gravitation. It was a surprising consequence of this theory that light should bend in the presence of large, heavy objects (Einstein was apparently willing to drop his theory if this turned out not to be the case). Independent of whether such a prediction turns out to be true or not, Popper considered it an important quality of ‘real science’ to make such ‘risky predictions’. Interesting thought, no?

I still find this an intriguing idea. The notion of ‘risky’ or ‘surprising predictions’ might actually be the beginning of a fruitful alternative to existing model selection techniques, such as goodness-of-fit (which theory predicts the data best) and simplicity (which theory gives the simplest explanation). In music cognition, too, measures like goodness-of-fit (r-squared, percentage of variance accounted for, and other measures from the experimental psychology toolkit) are often used to confirm a theory. Nevertheless, it is non-trivial to think of theories that make surprising predictions, that is, theories that predict a yet unknown phenomenon as a consequence of their own intrinsic structure. If you know of any, let me know!
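For concreteness, the goodness-of-fit measure mentioned above, r-squared or the proportion of variance accounted for, can be computed as follows (a minimal textbook sketch, not tied to any particular music cognition model):

```python
def r_squared(observed, predicted):
    """Proportion of variance in the observed data accounted for by a model's
    predictions: 1 minus the ratio of residual to total sum of squares."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)   # total variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # unexplained variance
    return 1 - ss_res / ss_tot
```

A perfect fit gives 1.0, and a model that merely predicts the mean of the data gives 0.0, which is exactly the worry: the measure rewards fitting data already in hand, saying nothing about whether the theory makes risky predictions.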

K. R. Popper (1963). Conjectures and Refutations. London: Routledge.

* Repeated blog entry from July 23, 2007 (celebrating finalizing a research proposal with Jan-Willem Romeijn on these issues, hoping to be able to address these issues head-on ;-)

Tuesday, January 27, 2009

It might look somewhat disturbing, but the picture that accompanies this entry is a snapshot of a two day old baby that is healthy and sound asleep! She is one of fourteen newborns that participated in a recent listening experiment, a collaboration between the Institute for Psychology of the Hungarian Academy of Sciences and our research group at the University of Amsterdam in the Netherlands. In this project we are interested in how newborn infants perceive the musical world around them and in how far certain musical skills are innate.

We know that newborn infants are sensitive to a variety of sounds. But what do they actually hear? Can they make sense of the musical world around them? Do they have a sense of rhythm, arguably one of the foundations of music?

To study this, we collaborated with a research group in Budapest, Hungary, led by István Winkler, a specialist in auditory perception and one of the pioneers in measuring brain activity in neonates.

Since the start of this European research project (named EmCAP) we talked a lot about how we could take advantage of existing theories in music cognition to study auditory perception in newborn infants, and how to probe their (potential) sense of rhythm. After many pilot studies, and after resolving quite a few methodological issues that come with doing experiments with neonates, we opted in the end for a simple, regular rock rhythm, consisting of hi-hat, snare, and bass drum (see below). We made several variants of this rock rhythm by omitting strokes on non-significant metrical positions (i.e. non-syncopated rhythms, in music-theoretical terms). We then inserted, once in a while, a 'deviant' segment: the same rhythm but with a missing ‘downbeat’ (i.e. a syncopated rhythm). The result sounded like this [click the play button; to stop, click again]:
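To make the stimulus construction concrete, here is an illustrative sketch of a standard bar and its downbeat-omitted deviant, represented as an eight-position grid. The exact patterns used in the study differ; this only shows the logic of turning a non-syncopated bar into a syncopated one by silencing the downbeat.

```python
# One bar as an eight-position grid; 1 = stroke, 0 = rest (illustrative values,
# not the exact stimuli used in the experiment).
STANDARD = {
    "hihat": [1, 1, 1, 1, 1, 1, 1, 1],
    "snare": [0, 0, 1, 0, 0, 0, 1, 0],
    "bass":  [1, 0, 0, 0, 1, 0, 0, 0],
}

def make_deviant(pattern):
    """Copy the bar and omit the strokes on the downbeat (position 0),
    yielding a syncopated version of the same rhythm."""
    deviant = {track: strokes[:] for track, strokes in pattern.items()}
    for track in ("bass", "hihat"):
        deviant[track][0] = 0  # silence the downbeat
    return deviant

DEVIANT = make_deviant(STANDARD)
```

If a listener (or a newborn's brain) has induced the beat from the standard bars, the silent downbeat in the deviant violates the prediction that a stroke will occur there, which is what the MMN-like response indexes.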

Since it is quite difficult to observe behavioral reactions in newborns, a small number of electrodes were carefully glued to the scalp and face of the newborns to measure their electrical brain signals (see photo). N.B. The babies were fed just before the measurements, with their mothers present during the whole session, which lasted twenty minutes.

What did the experiment reveal? Well, shortly after each ‘deviant’ segment began, the babies' brains produced an electrical response indicating that they had expected to hear the downbeat but had not. As such we could show that newborn infants can detect the beat in music (The results will be published this week in PNAS Early Edition).

What are the potential implications of these findings? For me, one of the most important realizations is that a cognitive skill called beat induction, which most of us think of as trivial (e.g., being able to tap your foot to the beat), is active so early in life. It can be seen as additional support for the idea that beat perception contributed to the origins of music, since it enables such actions as clapping, making music together, and dancing to a rhythm. Next to being music-specific, beat induction is also considered to be uniquely human. Even our closest evolutionary relatives, such as the chimpanzee and bonobo, do not synchronize their behavior to rhythmic sounds. This makes beat induction a fundamental issue in current music cognition research (see, e.g., Patel, 2008:402).

Furthermore, the results challenge some earlier assumptions that beat induction is learned in the first few months of life, for example by parents rocking the infant. Our study suggests that beat perception must be either innate or learned in the womb (as the auditory system is at least partly functional from approximately three months before birth).

Finally, it should be noted that the auditory capabilities underlying beat induction are also necessary for bootstrapping communication by sound, allowing infants to adapt to the rhythm of the caretaker’s speech and to find out when to respond to it or to interject their own vocalizations. Therefore, although these results are compatible with the notion of a genetic origin of music in humans, they do not provide the final answer in this longstanding debate.

Wednesday, January 07, 2009

This is a short entry with a video impression of the Spinoza te Paard lecture series on recent developments in science, aimed at a general audience. The full broadcast can be viewed at spinozadebat gemist.