This page contains information about ongoing and historical academic activity, including conference talks, symposium participation, roundtables, interviews, and more.
To request further details, copies of presentations, or other materials, please get in touch using the contact details below:
email 2: firstname.lastname@example.org
Academic Conferences & Symposia
‘The Old School Groove: Relationships with Nostalgia in the Music of RuneScape’. Ludomusicology 2022, Royal Holloway, University of London (April 2022); “Press Start” Symposium, University of North Texas (March 2022).
In 2013, game developer Jagex released a “new” title: Old School RuneScape (OSRS). Based on a 2007 backup of their award-winning MMORPG RuneScape, OSRS aimed to capitalise on feelings of nostalgia amongst current and former players. Since its release, the game has received content based on the modern RuneScape game alongside original content designed to fit within the “retro” OSRS game. Similarly, music within OSRS can be seen as a combination of restored 2007 music, “unmastered” music from RuneScape, and new music for OSRS content. The resulting mixture of inspirations and approaches poses challenges to the concept of nostalgia within the OSRS soundtrack.
Technical limitations present from 2003 to 2007 placed barriers between musicians and composition in the original RuneScape game. Within OSRS, these restrictions are now largely artificial and have, on occasion, been ignored: sound libraries have been changed, new instruments sampled, and audio hardware/software significantly improved. Alongside changes to audio implementation, challenges to the “authenticity” of the soundtrack can be found: new tracks have been added, including tracks which supplement or entirely replace music found in the original “nostalgic” release, and player experiences of music have been altered through music engine updates. Compositional approaches have also changed significantly, with new composers finding creative solutions to overcome any remaining artificial restrictions.
Despite these changes clearly challenging the nostalgic recreation at the heart of OSRS, fan reactions to changes within and around the music have been generally positive. An understanding of how fans conceive of the “retro” aesthetic of OSRS can be gained by considering issues raised and discussed within the community: how do fans situate their sense of “nostalgia” in the context of a living, changing game? This offers broader contextualisation of questions surrounding the recreation of audio for “retro” remakes.
‘The God Complex: Creating, Managing, and Moderating Online Communities’. Intimacy, Communities and Identities: Digital Platforms and Network Cultures, Chinese University of Hong Kong (December 2021)
Moderating online communities is a form of censorship which is not always well understood. Whilst the role of the moderator is to prohibit ‘unacceptable’ content – a process which is unashamedly non-democratic – the goal is often, to a degree, selfless: the moderator intends to protect the community to enable it to survive and/or grow in perpetuity.
Based on personal experiences moderating online spaces with millions of unique monthly visitors and managing online communities with thousands of daily active players, the paper explores three critical experiences of moderating online discussion in a gaming group revitalised online throughout the COVID-19 crisis: a new member whose English language skills were gained through listening to American rap music; the discovery that a member held views functionally incompatible with contemporary Western values on women’s rights; and the management of conflict arising when an older community with a different culture merged into our group.
Understanding our approaches to these issues offers insight into why many online gaming communities do not challenge hate speech or other inappropriate behaviour. The paper presents successful strategies we adopted to overcome these challenges and acknowledges the lessons learnt when we failed. Is there a way to modernise approaches to internet content moderation that will allow moderators to challenge misinformation, mitigate radicalisation, and answer new waves of socio-political extremism online?
‘Playback (Only?) As Intended: Reflections on Research into the Music of Final Fantasy XIV’. North American Conference of Video Game Music 2021, virtual conference (July 2021)
How do the ludic and social aspects of play interact during research into multiplayer games, and how might this alter analyses of the game? This paper reflects on the specific difficulties of engaging with research into Massively Multiplayer Online Role-Playing Games (MMORPGs) and explores several different methods for research-play in a multiplayer environment, based on research undertaken between 2018 and 2020 into the music of the MMORPG Final Fantasy XIV: A Realm Reborn.
Music in Final Fantasy XIV: A Realm Reborn changes during collaborative play experiences, which alters the social experience of the game. As the research had a specifically multiplayer focus, questions surrounding methodology arose: how can the same play event be observed from multiple perspectives simultaneously? A variety of approaches were tried: utilisation of international gaming communities, participation in game-specific communities, “multi-boxing”, LAN-style multiplayer experiences, and others. These yielded sufficient success to complete the research, but each method had implications for how the game music was received. Players of MMORPGs engage with the game in substantially different ways and are not always conscious of how this affects the musical experience of the game: in what ways could this research be considered ethnographic, or hermeneutical? Do these terms have specific value within the study of multiplayer games, where community involvement may be essential to understanding how players perceive and interpret the game-text?
‘Bridging the Gap? Obstacles to Higher Music Education in the UK’. Midwest Graduate Music Consortium 2021: Timely Conversations, University of Michigan (April 2021); Symposium on the Arts and Decolonisation, Royal Holloway University of London (February 2021)
Whilst indications of progress are present in the fight to minimise discrimination in university admissions in the UK, a 2020 quantitative analysis of gender demographics for Higher Music Education (HME) courses in the UK between 2014 and 2020 revealed a significant gender divide in some areas. ‘Traditional’ academic music degrees and degrees combining music and theatre (e.g., Musical Theatre) have predominantly female populations, whilst degrees combining music and technology and degrees covering popular music or popular music performance have predominantly male populations. These are indications of a continued and substantial gender gap which must be bridged in future access and inclusivity initiatives.
To identify potential areas for impactful work, a qualitative survey of advertised entry requirements for undergraduate degrees matriculating in 2021 in music-related subjects at universities in the UK was conducted. This revealed five main types of entrance requirement for HME in the UK: academic qualifications; non-academic qualifications; artistic requirements demonstrated through creative portfolio, interview or audition; further study requirements, such as foundation years or additional study years; and other informal requirements.
By examining existing data surrounding these pathways, three key questions can be answered: what obstacles to access can be observed in this study? What difficulties do those working in the UK face as they attempt to challenge these obstacles? What further data is required to maximise the impact of diversity initiatives?
‘Speak of the Devil: (pseudo-)player voice in video game marketing’. Ludomusicology 2020, virtual conference (April 2020)
Popular reception to video game trailers is often at its most polarised when marketing campaigns attempt to capitalise on the social and emotional power of player voice. Voice-led trailers for Anthem (Bioware, 2018), EVE Online (CCP Games, 2003), The Division (Ubisoft, 2016), Rainbow Six Siege (Ubisoft, 2014) and others have been released as part of video game marketing campaigns. By using (pseudo-)player voice in video game trailers, marketing agencies attempt to demonstrate the social experience of their multiplayer games and perhaps even authenticate their trailer as being representative of real play.
Whilst some designers do manage to integrate idealised player speech cohesively into video game trailers, many fall flat: player reception to voice-led video trailers is often muted at best. In some instances, player reception has been actively hostile, with reimagined remixes offering criticism of both the marketing campaign and the development studios and publishers. Often, it seems that players find voice-led trailers to be “cringeworthy”, inaccurate or otherwise non-representative of their player experience. This demonstrates that players and game studios often seem to have different perceptions of player voice within video games.
This paper explores several trailers and their audience response, identifying what makes voice-led marketing material receive positive feedback from the intended audience. This allows us to suggest how players may envisage voice within their play experiences.
‘Halo: Transcription Evolved – Overcoming issues in Transcribing Modular Video Game Score’. Ludomusicology 2019, Leeds Beckett University (April 2019)
Halo: Combat Evolved utilises non-linear music to accompany narrative. Music is generated using randomised, algorithmic and interactive, real-time processes. However, transcriptions of the score typically portray a single simplistic variant of this infinitely variable music. As a result, these transcribed portrayals of the score fail to accurately present the working practices and musical design of the composer and sound designer Marty O’Donnell. Relationships between music and interactive content are specifically lost, as is the algorithmic nature of the perceived musical score within the game. To account for these issues, this score can be represented as a series of layers and loops in a modular score similar to that outlined by Medina-Gray.
However, even with an understanding of how we could represent the score, the transcription process for Halo: Combat Evolved does not become straightforward. Investigating the “black box” of the video game world through play poses a number of issues, both specific to Halo and common to video games generally. Amongst the most obvious and frustrating issues: music cannot be fully isolated within the game engine without cracks or hacks, sound layers are mixed ‘automagically’ by the sound engine, and gameplay triggers for loops and layers are inconsistent. Elements of randomness combine with contradictory information about the music and sound of the game to cause greater difficulties in identifying loops and layers.
These difficulties are not unique to Halo: Combat Evolved. Through an exploration of the process of transcribing the well-recognised music of Halo, this paper demonstrates methods, such as waveform manipulation and analysis, gameplay routing, and game engine manipulation, that can be applied when examining video game music through “black-box” recordings.
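The waveform-analysis approach mentioned above can be illustrated with a minimal, hypothetical sketch: a naive self-similarity search for the length of a repeating loop in a mono sample list. Nothing here reflects O’Donnell’s actual tooling or the Halo sound engine; the function name and toy data are invented for illustration, and real work would operate on audio captured from “black-box” recordings.

```python
# Hypothetical sketch: estimating a loop length in a captured game-audio
# signal via naive self-correlation. Assumes `samples` is a short list of
# mono float samples; real recordings would be far longer and noisier.

def find_loop_length(samples, min_lag=1):
    """Return the lag (in samples) at which the signal best repeats itself."""
    n = len(samples)
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, n // 2 + 1):
        # Correlate the signal with a copy of itself shifted by `lag`.
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        # Normalise by overlap length so shorter overlaps are comparable.
        score /= (n - lag)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A toy 8-sample "loop" repeated four times.
loop = [0.0, 1.0, 0.5, -0.5, -1.0, -0.5, 0.5, 1.0]
signal = loop * 4
print(find_loop_length(signal))  # → 8 (the loop length)
```

A brute-force search like this is quadratic in the recording length; in practice one would work on windowed spectral features rather than raw samples, but the underlying idea of locating repeats via self-similarity is the same.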
‘Madness Effect: Audiating Mental Instability’. Constructing the Moving Image (RMA Study Day), Huddersfield University (February 2019)
In Bioware's Mass Effect franchise, the effects of war cause characters to develop poor mental health. Characters begin to hallucinate visually and aurally, become obsessive or suicidal, experience nightmares and poor sleep, or express other symptoms of poor mental health. Some of these symptoms are the result of “indoctrination” by the antagonists of the game, whereby the antagonists mentally compromise key figures within the game’s narrative in order to further their agenda. In the final instalment of the original trilogy, the player character also begins to experience symptoms of poor mental health. This allows us to explore how episodes of mental instability are demarcated throughout the Mass Effect franchise, particularly through changes to the franchise’s established conventions of scoring and sound design, and investigate how Bioware attempt to communicate the player character’s deteriorating mental health to the player.
‘Replacing Graphical User Interfaces in PAYDAY 2’. Ludomusicology 2018, HMT Leipzig (April 2018)
Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements found in GUIs common to these genres, such as player radars, objective markers and ability timers. Instead, these commonplace GUIs are replaced with auditory interfaces.
This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.
To successfully navigate the game world and find ludic success, players must develop an understanding of the game audio in what seems similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game, following the principles outlined by Grimshaw and Ward, and allow us to establish a basis for examination of immersive audiovisual environments such as those found in virtual reality.