Much of what we know and love about music is based on implicitly acquired mental representations of musical pitches and the relationships between them. While previous studies have shown that these mental representations of music can be acquired rapidly and can influence preference, it is still unclear which aspects of music influence learning and preference formation. This article reports two experiments that use an artificial musical system to examine two questions: (1) which aspects of music matter most for learning, and (2) which aspects of music matter most for preference formation. Two aspects of music are tested: melody and harmony. In Experiment 1 we tested the learning and liking of a new musical system that was manipulated melodically so that only some of the possible conditional probabilities between successive notes were presented. In Experiment 2 we administered the same tests for learning and liking, but used a musical system that was manipulated harmonically to eliminate the property of harmonic whole-integer ratios between pitches. Results show that disrupting melody (Experiment 1) prevented learning of the music without disrupting preference formation, whereas disrupting harmony (Experiment 2) did not affect learning and memory but did disrupt preference formation. The results point to a possible dissociation between learning and preference in musical knowledge.
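The melodic manipulation described above restricts which note-to-note conditional probabilities a listener encounters. As an illustrative sketch (not the authors' materials), first-order conditional probabilities P(next note | current note) can be estimated from a note sequence like this:

```python
from collections import Counter, defaultdict

def transition_probabilities(melody):
    """Estimate first-order conditional probabilities P(next | current)
    from a sequence of notes (any hashable labels)."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(melody, melody[1:]):
        pair_counts[current][nxt] += 1
    probs = {}
    for current, counter in pair_counts.items():
        total = sum(counter.values())
        probs[current] = {nxt: n / total for nxt, n in counter.items()}
    return probs

# Toy melody: after "C" the only observed continuation is "D",
# while "D" is followed by "E" and "C" equally often.
probs = transition_probabilities(["C", "D", "E", "C", "D", "C"])
```

A manipulated system of the kind the abstract describes would expose listeners to only a subset of these transitions, so the learned distribution would have structural gaps.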

Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant (CI) recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes while four physical properties of the distractor notes were varied. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
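The multidimensional-scaling step above converts dissimilarity ratings into coordinates in a perceptual space whose inter-point distances approximate the ratings. A minimal sketch of classical (Torgerson) MDS on a toy dissimilarity matrix follows; this is an illustration of the general technique, not the study's actual analysis pipeline:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: recover k-dimensional coordinates whose
    pairwise Euclidean distances approximate the symmetric dissimilarity
    matrix D (n x n, zero diagonal)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)     # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:k]      # keep the k largest
    scale = np.sqrt(np.clip(eigvals[idx], 0.0, None))
    return eigvecs[:, idx] * scale           # n x k coordinates

# Toy example: four stimuli lying on a line at positions 0, 1, 2, 3.
pos = np.array([0.0, 1.0, 2.0, 3.0])
D = np.abs(pos[:, None] - pos[None, :])
X = classical_mds(D, k=1)
# The recovered 1-D coordinates reproduce the input dissimilarities
# exactly (up to translation and sign).
```

In the study, a regression between physical cue values and such perceptual coordinates would then yield the minimal perceptual distance supporting segregation.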

In recent issues of this journal, Roger Scruton and Malcolm Budd have debated the question whether hearing a melody in a sequence of sounds necessarily involves an ‘unasserted thought’ about spatial movement. According to Scruton, the answer is ‘yes’; according to Budd, the answer is ‘no’. The conclusion of this paper is that, while Budd may have underestimated the viability of Scruton's thesis in one of its possible interpretations, there is no good reason to assume that the thesis is true. Very briefly, the argument for the second part of the conclusion is that we can account for all the data adduced by Scruton in favour of his hypothesis by means of hypotheses that are far less daring.

This paper aims to examine the awesome, almost spiritual feeling I experience as an ‘extreme spectator’ while watching Kelly Slater ride the monstrous waves of Pipeline. Drawing on the aesthetics of Kant and Schopenhauer, I examine the experience of the sublime and how it, in conjunction with the perceived kinetic melody of Slater's movements and his karmic connection to the environment in which he thrives, gives rise to the deeply felt awe of the extreme spectator. My intention is to use Slater's case as a paradigm that can be applied to many other athletic performances which share the characteristics discussed in the paper.

The stormy development of vocal production during the first postnatal weeks is generally underestimated. Our longitudinal studies revealed an amazingly fast unfolding and combinatorial complexification of pre-speech melodies. We argue that relying on “melody” could provide for the immature brain a kind of filter to extract life-relevant information from the complex speech stream.

It has long been known from the extant ancient Greek musical documents that some composers correlated melodic contour with word accents. Up to now, the evidence of this compositional technique has been judged impressionistically. In this article a statistical method of interpretation through computer simulation is set forth and applied to the musical texts, focusing on the convention of correlating a word's accent with the highest pitch level in the melody for that word: the Pitch Height Rule. The results provide a sounder basis for judging evidence for the operation of this convention in specific pieces and a sharper delineation of its use in the history of ancient Greek music. The ‘rule’ was used by at least some composers from the late second century BC through the second century AD, but there is no certainty that it was used before or after this period. In some cases where previous scholars have discovered the rule's operation, statistical analysis casts doubt. Of special interest is the finding that one piece long judged as offering no evidence of the use of the rule probably displays an inversion or parody of the rule for rhetorical-musical effect.
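A simulation-based test of this kind can be formalized as a Monte Carlo null model: if pitch peaks were assigned independently of accent, how often would at least the observed number of words place their highest pitch on the accented syllable? The sketch below is an illustrative simplification (accent position uniform under the null, one strictly highest syllable per word), not the article's actual procedure:

```python
import random

def permutation_p_value(words, observed_hits, n_sim=10000, seed=1):
    """Monte Carlo estimate of the chance probability of observing at
    least `observed_hits` words whose accented syllable carries the
    word's highest pitch, if pitch peaks were placed independently of
    accent. `words` is a list of per-word syllable counts."""
    random.seed(seed)
    extreme = 0
    for _ in range(n_sim):
        # Under the null, the peak lands on the accented syllable
        # with probability 1/n for an n-syllable word.
        hits = sum(1 for n in words if random.randrange(n) == 0)
        extreme += hits >= observed_hits
    return extreme / n_sim
```

A small p-value would indicate that the observed accent-peak agreement in a piece is unlikely under chance placement, supporting use of the Pitch Height Rule.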

A great deal of effort has been, and continues to be, devoted to developing consciousness artificially (a small selection of the many authors writing in this area includes: Cotterill (J Conscious Stud 2:290–311, 1995, 1998), Haikonen (2003), Aleksander and Dunmall (J Conscious Stud 10:7–18, 2003), Sloman (2004, 2005), Aleksander (2005), Holland and Knight (2006), and Chella and Manzotti (2007)), and yet a similar amount of effort has gone into demonstrating the infeasibility of the whole enterprise (most notably: Dreyfus (1972/1979, 1992, 1998), Searle (1980), Harnad (J Conscious Stud 10:67–75, 2003), and Sternberg (2007), but there are a great many others). My concern in this paper is to steer some navigable channel between the two positions, laying out the necessary pre-conditions for consciousness in an artificial system, and concentrating on what needs to hold for the system to perform as a human being or other phenomenally conscious agent in an intersubjectively-demanding social and moral environment. By adopting a thick notion of embodiment—one that is bound up with the concepts of the lived body and autopoiesis (Maturana and Varela 1980; Varela et al. 2003; and Ziemke 2003, 2007a, J Conscious Stud 14(7):167–179, 2007b)—I will argue that machine phenomenology is only possible within an embodied distributed system that possesses a richly affective musculature and a nervous system such that it can, through action and repetition, develop its tactile-kinaesthetic memory, individual kinaesthetic melodies pertaining to habitual practices, and an anticipatory enactive kinaesthetic imagination. Without these capacities the system would remain unconscious, unaware of itself embodied within a world.
Finally, and following on from Damasio's (1991, 1994, 1999, 2003) claims for the necessity of pre-reflective conscious, emotional, bodily responses for the development of an organism's core and extended consciousness, I will argue that without these capacities any agent would be incapable of developing the sorts of somatic markers or saliency tags that enable affective reactions, and which are indispensable for effective decision-making and subsequent survival. My position, as presented here, remains agnostic about whether or not the creation of artificial consciousness is an attainable goal.

The formation of coherent percepts requires grouping together spatio-temporally disparate sensory inputs. Two major questions arise: (1) is awareness necessary for this process; and (2) can non-conscious elements of the sensory input be grouped into a conscious percept? To address this question, we tested two patients suffering from severe left auditory extinction following right hemisphere damage. In extinction, patients are unaware of the presence of left-side stimuli when they are presented simultaneously with right-side stimuli. We used the ‘scale illusion’ to test whether extinguished tones on the left can be incorporated into the content of conscious awareness. In the scale illusion, healthy listeners obtain the illusion of distinct melodies, which are the result of grouping of information from both ears into illusory auditory streams. We show that the two patients were susceptible to the scale illusion while being consciously unaware of the stimuli presented on their left. This suggests that awareness is not necessary for auditory grouping and that non-conscious elements can be incorporated into a conscious percept.

A structured awareness of time lies at the core of the law's distinctive normativity. Melody is offered as a rough model of this mindfulness of time, since some important features of this awareness are also present in a hearer's grasp of melody. The model of melody is used, first, to identify some temporal dimensions of intentional action and then to highlight law's mindfulness of time. Its role in the structure of legal thinking, and especially in precedent‐sensitive legal reasoning, is explored. This article argues further that melody‐modeled mindfulness of time is evident also at a deeper and more pervasive level, giving structure to the distinctive mode of law's normative guidance. The article draws one important theoretical consequence from this exploration, namely, that the normative coherence of momentary legal systems depends conceptually on their coherence over time.

Asked about Wittgenstein's contribution to aesthetics, one might think first of all of his discussion of ‘family resemblance’ concepts, in which he argued that the various instances of games, for example, need not have any feature or set of features in common, in virtue of which they are all called games; the concept of a game can function perfectly well without any such set of conditions. This insight was soon applied to the much debated quest for a definition of the word ‘art’, and it was claimed that here too the various instances of art were related by way of family resemblance, so that it was futile to look for a condition or set of conditions, which works of art, and only works of art, had in common. Wittgenstein himself did not extend his argument to the concept of art. Although he was deeply interested in the arts, especially music, he wrote very little on aesthetics, his most sustained treatment of the topic being available for us only in the form of notes taken of a set of his lectures on aesthetics.

The mere exposure phenomenon (repeated exposure to a stimulus is sufficient to improve attitudes toward that stimulus) is one of the most inspiring phenomena associated with Robert Zajonc’s long and productive career in social psychology. In the first part of this article, Richard Moreland (who was trained by Zajonc in graduate school) describes his own work on exposure and learning, and on the relationships among familiarity, similarity, and attraction in person perception. In the second part, Sascha Topolinski (a recent graduate who never met Zajonc, but found his ideas inspirational) describes his own work concerning embodiment and fluency in the mere exposure effect. Also, several avenues for future research on the mere exposure phenomenon are identified, further demonstrating its continuing relevance to the field.

Why does major music sound happy and minor music sound sad? The idea that different musical modes are best suited to the expression of different emotions has been prescribed by composers, music theorists, and natural philosophers for millennia. However, the reason we associate musical modes with emotions remains a matter of debate. On one side there is considerable evidence that mode-emotion associations arise through exposure to the conventions of a particular musical culture, suggesting a basis in lifetime learning. On the other, cross-cultural comparisons suggest that the particular associations we make are supported by musical similarities to the prosodic characteristics of the voice in different affective states, indicating a basis in the biology of emotional expression. Here, I review developmental and cross-cultural studies on the affective character of musical modes, concluding that while learning clearly plays a role, the emotional associations we make are (1) not arbitrary, and (2) best understood by also taking into account the physical characteristics and biological purposes of vocalization.

Although Newman’s Fifteenth Oxford University Sermon is often considered a precursor to An Essay on the Development of Christian Doctrine (1845), the following essay views this Sermon as an expression of Newman’s personal struggle from 1839 to 1845: in the midst of confusion, he pondered; against the threat of liberal skepticism, he defended truth; in the face of doubt, he reaffirmed his relationship with God.

This fMRI study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words, pitch and rhythm. Univariate and multivariate analyses were performed on the brain activity patterns of six conditions, arranged in a subtractive hierarchy: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; as well as the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. The left IFG was involved in word- and pitch-related processing in speech, the right IFG in processing pitch in song. Furthermore, the IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the activity of IFG and IPS.

Musicians imagine music during mental rehearsal, when reading from a score, and while composing. An important characteristic of music is its temporality. Among the parameters that vary through time is sound intensity, perceived as patterns of loudness. Studies of mental imagery for melodies (i.e. pitch and rhythm) show interference from concurrent musical pitch and verbal tasks, but how we represent musical changes in loudness is unclear. Theories suggest that our perceptions of loudness change relate to our perceptions of force or effort, implying a motor representation. An experiment was conducted to investigate the modalities that contribute to imagery for loudness change. Musicians performed a within-subjects loudness change recall task, comprising 48 trials. First, participants heard a musical scale played with varying patterns of loudness, which they were asked to remember. There followed an empty interval of 8 seconds (nil distractor control), or the presentation of a series of 4 sine tones, or 4 visual letters or 3 conductor gestures, also to be remembered. Participants then saw an unfolding score of the notes of the scale, during which they were to imagine the corresponding scale in their mind while adjusting a slider to indicate the imagined changes in loudness. Finally, participants performed a recognition task of the tone, letter or gesture sequence. Based on the motor hypothesis, we predicted that observing and remembering conductor gestures would impair loudness change scale recall, while observing and remembering tone or letter string stimuli would not. Results support this prediction, with loudness change recalled less accurately in the gestures condition than in the control condition. An effect of musical training suggests that auditory and motor imagery ability may be closely related to domain expertise.

Following in a psychological and musicological tradition beginning with Leonard Meyer, and continuing through David Huron, we present a functional, cognitive account of the phenomenon of expectation in music, grounded in computational, probabilistic modeling. We summarize a range of evidence for this approach, from psychology, neuroscience, musicology, linguistics, and creativity studies, and argue that simulating expectation is an important part of understanding a broad range of human faculties, in music and beyond.
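Probabilistic models of melodic expectation in this tradition typically quantify how surprising each note is given a distribution learned from prior exposure. The toy bigram sketch below (with add-one smoothing) illustrates the general idea; it is far simpler than the variable-order models actually used in this literature, and all melodies here are invented examples:

```python
import math
from collections import Counter, defaultdict

def surprisal_profile(train, test):
    """Per-note surprisal (-log2 P) of each transition in `test` under a
    bigram model with add-one smoothing estimated from `train`."""
    alphabet = sorted(set(train) | set(test))
    counts = defaultdict(Counter)
    for current, nxt in zip(train, train[1:]):
        counts[current][nxt] += 1
    vocab = len(alphabet)
    surprisals = []
    for current, nxt in zip(test, test[1:]):
        total = sum(counts[current].values())
        p = (counts[current][nxt] + 1) / (total + vocab)  # smoothed estimate
        surprisals.append(-math.log2(p))
    return surprisals

# A continuation frequently heard in training is less surprising
# than one never heard.
train = list("CDECDECDE")
expected = surprisal_profile(train, list("CD"))[0]
unexpected = surprisal_profile(train, list("CE"))[0]
```

High-surprisal notes correspond to violated expectations, the quantity that Meyer-style accounts link to musical tension and affect.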

We tested changes in cortical functional response to auditory configural learning by training ten human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music). We measured covariation in blood oxygenation signal with increasing pitch-interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature of interest. A psychophysical staircase procedure with feedback was used for training over a two-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch-interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch-interval size, such that those who had a higher sensitivity to pitch-interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex specifically to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch-interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function, and with data from other modalities.
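The adaptive staircase procedure mentioned above adjusts the stimulus level (here, pitch-interval size) trial by trial to track a listener's discrimination threshold. Below is a minimal sketch of a generic 2-down/1-up staircase, which converges near the 70.7%-correct point; it illustrates the general method, not the study's exact protocol:

```python
def staircase(respond, start, step, n_reversals=8):
    """Simple 2-down/1-up adaptive staircase: the stimulus level decreases
    after two consecutive correct responses and increases after each
    error. `respond(level)` returns True for a correct response. Returns
    the mean level at the reversal points as the threshold estimate."""
    level = start
    correct_streak = 0
    direction = 0          # -1 descending, +1 ascending, 0 at start
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:          # two in a row: go down
                correct_streak = 0
                if direction == +1:
                    reversals.append(level)  # turning point
                direction = -1
                level -= step
        else:                                # any error: go up
            correct_streak = 0
            if direction == -1:
                reversals.append(level)      # turning point
            direction = +1
            level += step
        level = max(level, step)             # keep the level positive
    return sum(reversals) / len(reversals)

# Simulated deterministic observer who is correct whenever the
# pitch interval is at least 3 units: the staircase settles between
# the last passed and first failed levels.
threshold = staircase(lambda level: level >= 3, start=10, step=1)
```

In practice the response function is a human listener with trial feedback, and the reversal average estimates the discrimination threshold that the study tracked over the two-week training period.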