
Ludomusicology

Approaches to Video Game Music

Edited by

Michiel Kamp

Utrecht University

Michiel Kamp is Junior Assistant Professor of Music at Utrecht University, where he specializes in teaching music in film and digital media. Michiel completed his AHRC-funded Ph.D. in Music at Cambridge University with his thesis, ‘Four ways of hearing video game music’, which focused on phenomenological approaches to listening. As well as presenting his research at international conferences, he has also written for journals including Philosophy and Technology. He has co-edited the forthcoming special issue on video game music for The Soundtrack.

Tim Summers

Royal Holloway

Tim Summers is a Teaching Fellow in Music at Royal Holloway, University of London. He has written on music in modern popular culture for journals including the Journal of the Royal Musical Association, Journal of Film Music and Music, Sound and the Moving Image. He is the author of Understanding Video Game Music (2016).

Mark Sweeney

Mark Sweeney completed his D.Phil. thesis (entitled “The Aesthetics of Videogame Music”) in musicology at Hertford College, Oxford. His primary research interests are aesthetic theory and video game music. He was previously Lecturer in Music at St Catherine’s College, Oxford.

The last half-decade has seen the rapid and expansive development of video game music studies. As with any new area of study, this significant sub-discipline is still tackling fundamental questions concerning how video game music should be approached. In this volume, experts in game music provide their responses to these issues.

This book suggests a variety of new approaches to the study of game music. In the course of developing ways of conceptualizing and analyzing game music it explicitly considers other critical issues including the distinction between game play and music play, how notions of diegesis are complicated by video game interactivity, the importance of cinema aesthetics in game music, the technicalities of game music production and the relationships between game music and art music traditions.

This collection is accessible, yet theoretically substantial and complex. It draws upon a diverse array of perspectives and presents new research which will have a significant impact upon the way that game music is studied. The volume represents a major development in game musicology and will be indispensable for both academic researchers and students of game music.

Table of Contents

Michiel Kamp (Utrecht University), Tim Summers (Royal Holloway, University of London) and Mark Sweeney

The editors’ introduction situates the essays in this collection within the wider landscape of the study of video game music. In particular, we are concerned with the relationship between games and music – a domain of research that is commonly referred to as ‘ludomusicology’. We first provide a brief history of the study of game music, highlighting the significant research findings and scholarly work that have led to the present state of the discipline. Secondly, we discuss the specific challenges to which the essays presented here respond, and how the research advances understanding of these issues. We trace the recurring themes of the volume through the chapters, such as medium specificity on the one hand (Chapters 2, 5, 7 and 9), and the way that video game music is embedded in musical and popular culture on the other (Chapters 3, 4, 6, 8, 10 and 11). Because one of the strengths of this volume is the multiple perspectives provided by the contributors, we highlight both where the authors are in agreement in their conclusions about game music, and where several alternative theories are expressed. We examine the reasons for such disparity, and how an appreciation of this difference is informative for game music scholars. We show how the research in the book advances understanding of game music in a significant way. Finally, we consider the most pertinent questions that remain unanswered in the study of game music and indicate how the chapters in this volume supply the discipline with new tools and theoretical apparatus for tackling these challenges. We conclude by arguing for the significance of video game music studies, both as a subject of study in its own right, and as a topic that informs, and interacts with, other areas of academic scholarship.

Tim Summers (Royal Holloway, University of London)

This chapter considers a very straightforward question: How does one go about analyzing video game music? With any youthful field of study, certain vital issues concerning the practicality of conducting research into the topic quickly arise. No guidance yet exists for students and scholars seeking to engage with video games as musical sources, and the chapter rectifies this lack of information. The chapter outlines the methods and sources available for analyzing video game music, as well as the problems faced by ludomusicologists. As such, it functions as a practical guide to researching and analyzing video game music, suitable both for the veteran researcher and the new analyst. The discussion is supported by case studies and a survey of the existing literature that enacts particular analytical approaches to game music. The body of the essay considers how the game code, game engine, music engine, hardware, recorded gameplay, music data, published scores and commercial audio recordings may be used for research. It provides examples of the analyses of these sources, as well as considering how each source type comes with particular advantages and disadvantages. These sources all have their own particular roles to play in the analysis of game music, and it is essential that analysts maintain a critical appreciation of the nature of each source that they use.

Isabella van Elferen is Professor of Music and School Director of Research at Kingston University London. She publishes on film, TV, and video game music, musical critical theory, Gothic, horror, SF and fantasy, and baroque sacred music. Her most recent book is the award-winning Gothic Music: The Sounds of the Uncanny.

While quantitative studies show overwhelming evidence that music and sound are crucial for video game immersion, the definition of game musical immersion itself has only been hinted at in very general terms. Ermi and Mäyrä (2005) maintain that game soundtracks lead to ‘sensory’ immersion; Karen Collins describes audiovisual involvement as ‘imaginative immersion’ (2008); Timothy Crick (2011) and Gordon Calleja (2011) contend that soundtracks rather induce an ‘affective’ involvement. These claims illustrate the lack of academic consensus and understanding of musical game immersion, and urge a systematic theorisation of this aspect of gaming to offer insight into questions pertaining to how musical player involvement is brought about or which factors play a role in it. It is the aim of this chapter to outline a research model that can bridge the gap between the practice of and the reflection on game music. The chapter proposes three overlapping, music-specific working concepts leading to a comprehensive framework charting the conditions for and mechanics of musical player involvement. This framework, the ALI-model, shows how musical affect, literacy and interaction cooperate in a process of signification, identification and play leading to game musical involvement. The chapter will explore each of the three components of the ALI model.

Elizabeth Medina-Gray received her Ph.D. in Music Theory from Yale University in 2014, where she completed her dissertation on the analysis of modular music in video games. Her research interests include music in video games and interactive multimedia, and 20th-century tonal music.

One of the most critical – and critically puzzling – aspects of video game music is that it is fundamentally dynamic; it is ‘changeable,’ and it ‘reacts both to changes in the gameplay environment and/or in response to the player’ (Collins, 2008). More specifically, nearly all video games employ modularity as a fundamental basis for dynamic music. Gradual changes in volume, tempo, or other parameters may also yield dynamic effects, but the actual musical content we hear while playing a game arises mainly from a collection of pre-composed pieces of music – that is, modules – which get triggered in real time as we play. This chapter contributes to our understanding of video game music and its dynamic qualities by placing it within a larger conceptual context of modularity, and provides the first thorough examination of modularity in video game music as such. I adopt and adapt a concept of modularity from Saunders (2008), in which I define modular music as any system consisting of a collection of modules and a set of rules that dictate how those modules may combine, and which ultimately undergoes a process of assembly to yield the final sounding music. I then explore how video game music fits into this broad framework, and identify specific ways in which game music is similar to other types of modular music, and ways in which it is unique. Two aspects of game music, especially, distinguish it from other types of modular music, and thus surface as important focal points for future studies of this music: (1) game music often provides information that the player actively uses while playing the game, or in other words, this music has a usability function (after Jørgensen, 2009); and (2) game music relies increasingly on a concept of smoothness (or lack thereof) between modules as they combine during gameplay.
Finally, by examining existing scholarly approaches to other modular music—especially of the 20th-century avant-garde—I suggest two main directions through which we might productively analyze video game music together with its dynamic qualities. This chapter argues that by attending to the dynamic and modular aspects of video game music, we gain a more complete understanding of this music within the context of games and gameplay. In this way, modularity becomes a vital and valuable component in the developing field of ludomusicology.
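The definition at the heart of this abstract, a collection of modules plus a set of combination rules that an assembly process resolves in real time as the game is played, can be sketched in a few lines of code. Everything below (the Module type, the example library, the RULES table and the assemble function) is a hypothetical illustration of that general idea, not anything drawn from the chapter itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    name: str   # identifier of a pre-composed musical fragment
    mood: str   # gameplay state this fragment was written for

# The module library: pre-composed fragments tagged by gameplay state.
LIBRARY = [
    Module("explore_loop_a", "explore"),
    Module("explore_loop_b", "explore"),
    Module("combat_loop", "combat"),
    Module("victory_sting", "victory"),
]

# Combination rules: which module names may follow which.  A rule set
# like this is where constraints on transitions between modules live.
RULES = {
    "explore_loop_a": {"explore_loop_b", "combat_loop"},
    "explore_loop_b": {"explore_loop_a", "combat_loop"},
    "combat_loop": {"victory_sting", "explore_loop_a"},
    "victory_sting": {"explore_loop_a", "explore_loop_b"},
}

def assemble(states):
    """Assemble a sounding sequence of modules from a stream of game
    states, honouring the combination rules."""
    sequence = []
    for state in states:
        # Candidates: modules written for the current gameplay state...
        candidates = [m for m in LIBRARY if m.mood == state]
        if sequence:
            # ...restricted to those the rules allow after the last one.
            allowed = RULES[sequence[-1].name]
            candidates = [m for m in candidates if m.name in allowed]
        if candidates:
            sequence.append(candidates[0])
    return [m.name for m in sequence]
```

In a sketch like this, the ‘smoothness’ the chapter discusses would live in the rule table: a richer model might score candidate transitions by key, tempo or instrumentation rather than simply allowing or forbidding them.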

Michiel Kamp (Utrecht University)

While a typical Hollywood film aims to present a consistent and coherent fictional world through methods such as continuity editing, video games, by virtue of their rule-bound nature, have no such obligations (see Juul, 2005). This means a video game’s fictional world or diegesis can be fragmented and interrupted to a far larger extent than that of a film, for instance by interface and pause-and-play mechanics, or by the death of a player avatar: when Mario dies in Super Mario Bros. (Nintendo, 1985), the diegesis is interrupted by a game-over screen, and the story world resets to a previous point in time. At the same time, the player’s control of the game can be taken away at any time without interruption of the diegesis, for instance, during non-interactive cutscenes that move the game’s story forward. One of the few elements of a game that can play a part in each of these four situations is the nondiegetic musical soundtrack. In this chapter, I offer an account of music as overarching a game in its totality, in the hopes of both broadening our understanding of the unique contribution of music to games, and of what makes music in games different from music in other media. I propose that there are two primary ways of understanding music beyond diegetic gameplay sequences: as peritexts (Genette, 1997; Summers, 2012) and as a structuring device that relates different situations to each other in time. As a peritext, music has relations to title cards and credit sequences in film, but also to films’ other paratextual elements such as DVD menus; as a structuring device, music works along the lines of what Claudia Gorbman (1987) calls suture in film music. Following a short overview of the different kinds of peritextual music found in video games, the chapter is then divided into two sets of case studies that exemplify peritextual and structural music. 
The first looks at the Max Payne (Remedy Entertainment and Rockstar, 2001–2012) and Battlefield (Digital Illusions/EA, 2002–2011) series, which both have strong ties to Hollywood tropes and narratives, including an emphasis on a continuous diegetic experience. I argue that the musical cues unique to nondiegetic situations in these games – specifically loading screens – function in similar ways to establishing shots in films. The second set of case studies describes how music structures the nondiegetic gameplay event of the death of an avatar in three different platform games. Unlike the games in the first set, two-dimensional platform games have a ‘looser’ diegetic world, as is exemplified by Super Mario Bros., Super Meat Boy (Team Meat, 2010) and Braid (Number None, 2008). Each of these games uses music to structure the gameplay situation surrounding the death of the player’s avatar in a different way.

Melanie Fritsch M.A. is a Berlin-based researcher. She worked as research assistant at the University of Bayreuth between 2008 and 2013, and is currently finishing her PhD thesis “Performing bytes. Musikperformances der Computerspielkultur”. She is also a member of the AHRC research network “Guitar Heroes in Music Education? Music-based video-games and their potential for musical and performative creativity”.

Since its earliest incarnations, video gaming can be seen as a form of ‘participatory culture’ (Jenkins, 1992). Besides exploring what can be done within the game itself, players also started to experiment with the soft- and hardware components, the artwork, or the narratives of games in order to create something themselves. The prompt emergence of practices like demos, remixes, chiptunes, or – of course – the creation of self-made levels or entire games is a vivid demonstration of the desire not just to play the games, but also to play with them. The same holds true for video game music. Typing, for example, “Super Mario Bros theme” into the YouTube search bar will deliver a huge number of videos in which people have played this tune or recreated it in every way imaginable (see Collins 2013). Furthermore, people also use the sound and/or visual style of the Mario video games in order to create tributes, crossover music videos or their own songs. With the “Automatic Mario” series, for example, a distinct subgenre of remix videos has been created. But these forms and practices do not come out of nowhere. As José Zagal puts it: “In summary, games exist in a broader cultural context, and it is important to use this cultural context in order to help understand a game and vice versa” (Zagal 2010, p. 28). This also applies to video game music. When people play with the sounds or the music, or create music videos, they not only remix the music – they also mingle several musical and cultural practices, using their knowledge of, and competencies regarding, the several cultural contexts they deal with. After briefly surveying the field of the culture of video game music outside of video games, using the creative work which has arisen around the Super Mario games as an example case, the chapter will investigate how this activity interrelates with other musical cultures, and how such relations become manifest in the creative works people produce.
“The Super Mario 8-bit Opera” by Jon and Al Kaplan and its further processing by the online community will be analyzed in the form of a case study. As a theoretical starting point, concepts of games literacy as introduced by Zagal (2010) and Zimmerman (2009), and their application to the field of game music (van Elferen, in this book), will be used.

Anahid Kassabian is a music and media scholar, with a special focus on listening in audiovisual contexts. She is the author of Ubiquitous Listening (2013), Hearing Film (2011) and numerous journal articles and book chapters. She is the co-editor of Ubiquitous Musics (2013) and Keeping Score (1997), past editor of both Music, Sound, and the Moving Image and Journal of Popular Music Studies, and past chair of the International Association for the Study of Popular Music.

University of Liverpool

Freya Jarman is Senior Lecturer in Music at the University of Liverpool, and works primarily on cultures and ideologies of the voice, especially in relation to gender, sexuality and queer theory. She is the author of Queer Voices (Palgrave 2011), and editor of Oh Boy! Masculinities and Popular Music (Routledge 2007).

This chapter considers the ways that music as a central activity is represented in video games and smartphone apps. It specifically considers artifacts that might be labeled ‘music games’, or ‘music apps’. Consideration of the term ‘music games’ offers up a knot of multiple conceptual strands, and the chapter first uses existing scholarship to explore the components that make up the notions of ‘play’ and ‘game’. There are two main types of music programmes: games in which the player is right or wrong, such as Taiko: Drum Master (Namco, 2004), or apps and other kinds of programmes, in which the player engages in ‘sandbox’ play, such as in the iPhone app Bloom (Eno and Chilvers/Opal, 2008). We show that music is represented in virtual worlds as either disciplinary or liberatory, and sometimes both. We conclude that games are virtual worlds in which a telos is established in the form of a prelusory goal; one strives to achieve something and is measured accordingly. The apps, however, do not so much create virtual worlds as they offer the tools to explore the audiovisual possibilities of the digital realm, in which one can make sounds and shapes and colours and patterns and rhythms ad infinitum for very low cost and with very little expertise whatsoever. This world of unstructured (or perhaps semi-structured) play precludes any significant prelusory goal beyond perhaps ‘enjoyment’ or ‘make pleasant sounds’. We see a distinct split in the representation of music in these two kinds of virtual worlds. The games present music as a task to be achieved, a disciplinary activity in which the player should ‘play’ again and again until the game (and thereby music) is mastered, while the apps allow for dipping in and out, for the most casual of engagements as well as more intensive devotion, and for experimentation and creativity. 
While it is clear to any practicing musician that the two styles of musical activity are inextricably intertwined, and cannot be pulled apart, music in virtual worlds seems only to be able to appear—or perhaps sound?—as one or the other, but not both.

Samantha Blickhan is a Ph.D. candidate at Royal Holloway, University of London. She holds undergraduate degrees in Vocal Performance and English Literature from the University of Iowa, and a Master’s in Musicology from the University of Oxford. Her Ph.D. thesis examines visual representations of sound, with a particular focus on the notation and palaeography of medieval song, and also considers modern methods of interaction with music notation in the fields of research and teaching.

Within the past five years, technology has brought the popular music album into a new realm of possibility. What was once a purely auditory experience is now able to stimulate other senses, such as touch and sight, and can offer a physically, visually, and mentally engaging experience to the consumer. In this chapter, I examine Björk’s album Biophilia (2011), released both as a traditional recorded album and as an interactive multimedia project available for use on the iPad and iPhone. The interactive nature of Biophilia is in many ways comparable to a video game, allowing users to make choices that affect the music being heard during use. This drastic change to the format of the music album demands a response to the question: how does an interactive relationship affect the way we listen to music? This chapter uses the song ‘Solstice’ as a case study. The song’s iPad presentation is examined and the analysis shows how user actions affect the way in which the consumer may hear and engage with the song. Through this discussion I will consider how offering users choices is part of what makes Biophilia game-like, but also how the structure of the app and the results of use may disqualify the album from truly being considered a video game. Due to the ambiguous presentation of player agency in the interactive versions of the Biophilia songs, the line between game play and music play can at times become blurred. It is important to discuss this ambiguity of intent in order to more clearly understand the possible results users can achieve. It can be argued that a directly interactive experience with a piece of music allows the listener to become invested in that music through personal choices, resulting in a wholly different experience than simply listening to an album of music. 
By recognizing their own presence within a song as a collaborator, the listener engages with the music on a strongly personal level, but do the varying results of interaction cause Björk’s own recorded ‘fixed’ version of each song to somehow matter less? This chapter will endeavour to facilitate discussion about where the line between creation and interpretation should (or even can) be drawn, and the possible challenges of presenting the manipulation of an original piece of musical art as a game.

Stephen Baysted is Reader in Film Composition at the University of Chichester. He is also Audio Director, Sound Designer and Composer for Slightly Mad Studios. His practice-based research has focused on questions of immersion, diegesis and genre, and has involved his work on Need for Speed: Shift (2009), Shift 2: Unleashed (2011) and Project C.A.R.S. (2015). Stephen’s video game audio has been nominated for two Motion Picture Sound Editors ‘Golden Reel’ Awards, a Jerry Goldsmith Award and two Game Audio Network Guild Awards.

Shift 2 Unleashed (Slightly Mad/EA, 2011) is a racing video game whose unique hybrid score renegotiates and re-imagines ‘chart-topping’ anthemic rock songs from ten bands in the US, Canada and the UK. The score fuses contemporary cinematic orchestral language, distorted electronica and pioneering post-production and sound design techniques. The score’s primary objective and function is, unusually for the racing game genre, a narrative one – seeking as it does both to describe the ‘real’ racing driver’s emotional and psychological journey and to represent and enhance the concomitant experiences of the game player. The score operates by consciously referencing the musical, orchestrational and productional vocabulary and values of key Hollywood film and trailer genres, and these vocabularies inform and guide the transformation of the songs into fully-fledged cinematic musical productions. The game player then identifies emotionally with the music via the well-understood processes of associative reception and reader response. The compositional process involved deconstructing the original recorded song ‘stems’ provided by record companies, identifying and isolating iconic song ‘fingerprints’ (quintessential elements of musical material that would permit the original song, like the written-over layer in a palimpsest, to be partially audible when the transformation was complete), and subjecting the resultant material to a variety of operations. These musical operations included reharmonisation, reorchestration (from vocals, guitars and drums, to full orchestra plus electronic sound sources and sound design elements), metrical reframing, tempo alteration, resynthesis and resampling, and genre transformation. As composer, audio director and sound designer of the game, I will report from first-hand experience.
The first section of the chapter will explore the underlying aesthetic objectives of the music and its compositional processes; the second section will give unique insights into the commercial tensions inherent in the production of a AAA game franchise and their impact on creative musical interventions; and the closing section will examine how the score functions as a cohesive, unifying and immersive force governing the player’s emotional responses.

Mark Sweeney

This chapter is concerned with a particular set of relationships between video game music, film music, and modernist avant-garde music. In a case study of Jason Graves’s soundtrack for the third-person science fiction survival horror game series Dead Space (EA, 2008, 2011, 2013), I delineate a particular soundworld and trace its origins back, via Hollywood, to the aleatory avant-garde music prevalent since the 1950s. For a horror game, Graves unsurprisingly drew on the rich heritage of science fiction and horror film scores. However, he also studied the works of Polish avant-garde composers Krzysztof Penderecki, Witold Lutosławski and Henryk Górecki. Throughout its disparate history, this particular soundworld has always been defined as ‘other’ to the security provided by the Western tonal tradition as characterised/caricatured in neo-romantic film scores – to which two of these composers, Penderecki and Górecki, nevertheless significantly ‘reverted’. While it was in the context of Hollywood’s science fiction and horror films that this aesthetic solidified its ‘scary’ semantic associations, I argue that games like Dead Space and its sequels have more in common with the aesthetic paradigm’s original intentions. This situation is particularly ironic given the wider avant-garde’s often dismissive attitude towards mass culture. In Dead Space, the player’s character, Isaac Clarke, is a silent protagonist – he does not speak. Given the widespread acknowledgement of the importance of ‘immersion’ as a primary goal for the video game medium, this narrative device is of particular interest as it supports a symbiotic relationship between player and avatar. The acoustic void left by Isaac’s silence is filled by both music and sound effects – sonically, sometimes indistinguishable from one another. The blurriness of this distinction problematizes the distance between sound effects and music, and goes hand-in-hand with Graves’s research into avant-garde aleatory experimentation.
Furthermore, the use of a dynamic music system that (re-)composes the soundtrack in real-time to fit the action on screen both supports and negates various aleatory principles.

William Gibbons is Assistant Professor of Musicology at Texas Christian University. He is the author of Building the Operatic Museum: Eighteenth-Century Opera in Fin-de-Siècle Paris (2013) and co-editor of Music in Video Games: Studying Play (2014). He is currently completing a new monograph, Unlimited Replays: The Art of Classical Music in Video Games.

Though they differ significantly in terms of game design, the console role-playing game Eternal Sonata (Tri-Crescendo/Namco, 2008) and the mobile rhythm game Frederic: Resurrection of Music (Forever Entertainment, 2012) feature the same unlikely protagonist: nineteenth-century composer Frédéric Chopin. Given the non-linear narratives typically required of modern games, the portrayal of real-life figures as major characters in video games remains relatively uncommon. Thus, despite Chopin’s central role in both games, neither could be considered a ‘biogame’ analogous to film’s ‘biopics’; they simultaneously appease the narrative demands of gaming and appeal to audiences not normally oriented toward classical music by featuring a stylized version of Chopin extracted from his historical context and juxtaposed with a more typical video-game environment. Chopin’s music features as prominently as the composer himself in both Frederic and Eternal Sonata, albeit in strikingly different settings. Much of the music in Eternal Sonata is newly composed, and makes no obvious reference to Chopin’s oeuvre. Breaks between the game’s ‘chapters,’ however, feature full-length performances of popular Chopin pieces played in a traditional manner by well-known Chopin interpreter Stanislav Bunin, visually accompanied by digital slideshows of relevant real-world locations and biographical narratives about the composer. Eternal Sonata’s treatment of Chopin’s works as objects of reverence and contemplation creates moments that break free of the game’s narrative conceits, juxtaposing real and fictional worlds and framing the story as mise en abyme – videogaming as allegorical biography. The soundtrack to Frederic, by contrast, includes mostly electronic dance-style remixes of Chopin’s music, drastically altering the original pieces to fit into this new genre.
Despite these significant differences in presentation, in both Frederic and Eternal Sonata the music forms part of a large-scale stylistic juxtaposition. The games both implicitly and explicitly revel in fundamental contradictions: ‘high’ and ‘low’ art; classical and popular musics; artistry and play; reality and fiction; and art as opposed to gaming. These ‘high-concept’ contradictions – ironically arising in what has been called a fundamentally unartistic medium – call into question the very idea of art in a postmodern era. Furthermore, Eternal Sonata and Frederic highlight the implications of pre-existing classical music as an aural and conceptual element of recent games writ large.


ISBN-13 (Hardback)

9781781791974

Price (Hardback)

£75.00 / $100.00

ISBN-13 (Paperback)

9781781791981

Price (Paperback)

£22.95 / $29.95

ISBN (eBook)

9781781794388

Price (eBook)

Individual: £22.95 / $29.95; Institutional: £75.00 / $100.00

Publication

01/07/2016

Pages

240

Size

234 x 156mm

Readership

scholars and students

Illustration

15 figures

Reviews

What my rapid review of these remarkable articles reveals is that music is crucial for games and not merely a trivial and minor aspect. It is modular, dynamic, interactive, narrative, historic, avant-garde, popular, creative, educational and many more things. If you would like further to explore these multiple directions, I recommend both of these inspiring volumes.

Popular Music (review of this book and another on a similar subject)

By underlining key issues, grappling with pertinent terms, outlining specific methodological tools, and contemplating core critical issues, this publication represents an admirable attempt to highlight not just the necessity of video game music studies as a field of scholarly enquiry, but so too the surprising level of maturity that it has already achieved in its relatively short lifetime.

IASPM Journal
