

My name is Stephen Tatlow

I am in the final stages of my Ph.D. at Royal Holloway, University of London, where I conducted research into user experiences of sound and music in virtual reality technologies. Having transitioned to research and analysis from a prior career in professional music, I draw on both quantitative and qualitative methods to support a detail-oriented approach. I have published research on a wide variety of topics (including music education, digital communities, virtual reality, and sound and music for games) in peer-reviewed journals and presented on these topics at conferences in Europe, North America, and Hong Kong. I believe that expertise is a process, not a destination, and I continuously work to maintain advanced knowledge of topics of importance and interest.

Resume available on request:



Ph.D. Music & Media Arts
Royal Holloway, University of London (2019–2023)

90,000-word thesis titled "Experiential Perspectives in Sound and Music for Virtual Reality Technologies" submitted to examiners in March 2023

M.Mus. in Advanced Musical Studies (Merit)
Royal Holloway, University of London (2016–2017)

Specialising in cultural and musical analysis, specifically examining music and sound for screen and games, and 20th- and 21st-century music.

Completed thesis titled: "In space, everyone wants to hear you scream: game audio in EVE Online and other video games"

B.Mus. (Hons) Music (2:1)
Goldsmiths, University of London (2012–2015)

Broad academic degree covering aspects of musical analysis, composition and performance in historical and contemporary genres, including music for screen, and music technology.


Completed dissertation titled: "Scoring the Robot: Musical Approaches to Empathy and Relationships in Pixar and WALL-E"


Professional Musician (2012–present)
  • Facilitate musical activities for charitable and educational organisations throughout the UK, including consulting on music matters such as licensing, selecting and purchasing resources, retaining professional musicians, and developing music programmes.

  • Teach one-to-one and group lessons to school and college students, adapting teaching style to effectively engage different audiences.

  • Develop accessibility and inclusivity programmes providing music lessons for disadvantaged children, both promoting and teaching on these initiatives.

  • Direct music ensembles, theatre productions, recordings, and other performances.

  • Produced asynchronous audio-visual performances during COVID-19 using Pro Tools and Adobe Premiere Pro.

Seminar/Workshop Leader

Royal Holloway, University of London (2020–2023)

  • Assisted in the teaching of first-year and second-year Music courses through leading seminars, contributing to lectures, and marking formative and summative assessments.

Projects & Initiatives

Peer Reviewer, Journal of Sound and Music in Games (ongoing)

  • Use specialist knowledge to assess the validity and suitability of submissions for publication.

  • Ensure quality standards for the journal are met by recommending alterations to all aspects of the publication for authors and editors as necessary.

Administrator for the Ludomusicology Online Research Community (2020–present)
  • Manage an online research community centred on sound and music for games with nearly 500 members.

  • Initiate partnerships with other organisations within the field, including major international partners such as the Society for the Study of Sound and Music in Games.

  • Facilitate online conferences and symposiums with up to 100 participants by creating interest in events, distributing press materials, managing access permissions, moderating discussions, and providing technical assistance to speakers and participants.

SSSMG Bibliography Rework (June 2022)
  • Managed major changes to a public bibliographic resource hosted by The Society for the Study of Sound and Music in Games (an international research society).

  • Improved workflow for editors and users by migrating the bibliography from Excel to Zotero, significantly decreasing the time required to add or change entries and making it easier for users to integrate the resource into their writing and publication workflows.

  • Identified areas and established frameworks for future expansion of the resource into open access bibliographic repositories and foreign language bibliographies.

Ludomusicology 2022 Conference (April 2022)
  • Assisted in organising an international conference at Royal Holloway, University of London.

  • Managed hybrid/online spaces on Microsoft Teams and Discord and provided technical support for in-person and remote presenters and attendees.

  • Chaired conference sessions by introducing speakers and topics, and facilitating Q&As.


‘Authenticity in Sound Design for Virtual Reality’. In History as Fantasy, edited by James Cook, Alexander Kolassa, Alex Robinson and Adam Whittaker. Abingdon: Routledge, 2023.

Virtual reality technology offers new opportunities for the creation of immersive historical experiences. With these new opportunities come new problems. One potentially problematic aspect of immersion within virtual reality is authenticity: the user’s belief that the virtuality they perceive would exist in reality. This chapter examines challenges to authenticity within the sound design of virtual reality environments. These challenges emerge from technical considerations, such as the technological mediatisation of sound, and practical considerations, such as audience subjectivity. Drawing on research into sound design, sound technology, historical recreation and other related topics, possible ways to negotiate authenticity within virtual reality are suggested and reasonable expectations for authentic sound design in virtual reality are established.

‘Exploring issues in the categorisation of HME courses through FOI surveys of gender demographics in UK higher education institutions’. British Journal of Music Education, 2022.

A common conclusion drawn from publicly available Higher Education Statistics Agency (HESA) data releases is that Higher Music Education (HME) courses have a predominantly male population. However, HESA data presents key issues when examining HME courses: which courses are reported as ‘music’ courses to HESA; how do universities decide which courses are ‘music’ courses; and how many different topics are contained within the umbrella of ‘music’ courses? To address questions of gender representation in HME, universities in the UK were approached with FOIA requests for the gender demographics of student populations on “music-related” courses.

Information was gained on 3,456 course populations between 2014 and 2020, which were categorised by subject of study. Six core undergraduate topics were identified: generic music degrees (female bias), degrees combining music and technology (male bias), degrees combining music and business (no gender bias), degrees on popular music (male bias), degrees combining music and theatre (female bias) and major conservatoire courses (no gender bias). No area was accurately represented by HESA data, and gender representation varied significantly between areas. These findings have implications for discussions of gender representation in HME across the UK.

‘Music and Narrative Experience in Final Fantasy XIV: A Realm Reborn’. In The Music of Nobuo Uematsu in the Final Fantasy Series, edited by Richard Anatone, 206–235. Bristol: Intellect, 2022.

Collection abstract: Japanese musician and composer Nobuo Uematsu has built his career and reputation on his soundtracks to the enduring Final Fantasy video game series, which are notable for their remarkable cinematic feel. Today Uematsu is one of Japan’s most beloved living composers, credited with inspiring a new generation of classical music fans. This volume, the first book-length study of the music of Uematsu, takes a variety of different analytical approaches to his body of work. It offers readers interested in ludomusicology—the study of and research into video game music—a variety of ways with which to understand Uematsu’s compositional process and the role that video game music has in the overall gaming experience.

Stephen Tatlow's contribution appears as Chapter 8 of this collection.

Summers, Tim, James Cook, Will Famer, Elisa Raffaella Ferrè, Lucy Harrison, Rich Hemming, Andra Ivănescu, Luke Reed, Flossie Roberts, Richard Stevens, Stephen Tatlow, Laryssa Whittaker. ‘Music and Sound in Virtual/Augmented Realities—Questions, Challenges and Approaches: A Multidisciplinary Roundtable’. Journal of Sound and Music in Games, 2, no. 2 (1 April 2021): 63–83.

The mid-to-late 2010s saw a renewed interest in virtual reality technologies with the release of a wide selection of consumer VR headsets and glasses, and the increased power of smartphones to provide augmented reality experiences. While research on sound and music in these technologies has started to gather pace, practice and research continue to grapple with significant creative and practical questions. In addition, such discussions have tended to be limited within disciplinary or professional silos. The roundtable presented here was an effort to begin conversations across a variety of fields of research and practice. We shared perspectives and exchanged views informed by different disciplinary traditions and experiences. We also sought to identify key questions and issues regarding music and sound in VR/AR. Three main themes are presented here: 1) Spaces and musical performances, 2) Realities and realism, and 3) Movement, orientation, and disorientation.

Tatlow, Stephen. ‘Everyone in Space Wants to Hear You Scream: Toward a Framework for Understanding Player Voice in Virtual Worlds’. Journal of Sound and Music in Games, 1, no. 3 (1 July 2020): 15–34.

When considering player voice in the context of game sound, existing examinations remain inconclusive. As player voice exists in a liminal position between reality and virtuality, some academics see it as a sonic violation of the game space. Voice can convey information about identity, which may oppose our understanding of the avatars within the game world. Voice can facilitate social communication, which may remind us of the physical world outside the virtuality. Mediations of voice into the virtual world may introduce obstacles or inflections that interfere with our enjoyment of the virtual space. Alongside these concerns, however, we can also find virtual worlds that prioritize and privilege player voice. Player voice can become part of character identity. Gameworlds can encourage us to communicate ludically, without disrupting immersion. Interruptions and disruptions can be limited by players.

Amongst others, the virtual world of the long-running MMORPG EVE Online demonstrates how voice can coexist with immersion. Marketing materials for the game now place player voice at the center of consumer focus. Including an interview with one of the videographers who placed player voice at the center of his fan videos, the article uses EVE Online as a case study for the integration of player voice into virtual worlds.

By examining virtual worlds and the role of voice within them, this article develops a framework for understanding player voice in the context of game sound. This allows us to recognize how player voice, an often overlooked aspect of game sound, can function within virtual worlds.


‘The Old School Groove: Relationships with Nostalgia in the Music of RuneScape’. Ludomusicology 2022, Royal Holloway, University of London (April 2022); “Press Start” Symposium, University of North Texas (March 2022).

In 2013, game developer Jagex released a “new” title: Old School RuneScape (OSRS). Based on a 2007 backup of their award-winning MMORPG RuneScape, OSRS aimed to capitalise on feelings of nostalgia amongst current and former players. Since its release, the game has received content based on the modern RuneScape game alongside original content designed to fit within the “retro” OSRS game. Similarly, music within OSRS can be seen as a combination of restored 2007 music, “unmastered” music from RuneScape and new music for OSRS content. The resulting mixture of inspirations and approaches poses challenges to the concept of nostalgia within the OSRS soundtrack.

Technical limitations present from 2003–2007 placed barriers between musicians and composition in the original RuneScape game. These restrictions predominantly exist artificially within OSRS and, on some occasions, have been ignored: sound libraries have been changed, new instruments sampled, and audio hardware/software significantly improved. Alongside changes to audio implementation, challenges to the “authenticity” of the soundtrack can be found: new tracks have been added, including tracks which supplement or entirely replace music found in the original “nostalgic” release, and player experiences of music have been altered through music engine updates. Compositional approaches have also changed significantly, with new composers finding creative solutions to overcome any remaining artificial restrictions.

Despite these changes clearly challenging the nostalgic recreation at the heart of OSRS, fan reactions to changes within and around the music have been generally positive. An understanding of how fans visualise the “retro” aesthetic of OSRS can be gained by considering issues raised and discussed within the community: how do fans situate their sense of “nostalgia” in the context of a living, changing game? This offers broader contextualisation of questions surrounding recreating audio for “retro” remakes.

‘The God Complex: Creating, Managing, and Moderating Online Communities’. Intimacy, Communities and Identities: Digital Platforms and Network Cultures, Chinese University of Hong Kong (December 2021)

Moderating online communities is a form of censorship which is not always well understood. Whilst the role of the moderator is to prohibit ‘unacceptable’ content – a process which is unashamedly non-democratic – the goal is often, to a degree, selfless: the moderator intends to protect the community to enable it to survive and/or grow in perpetuity.

Based on personal experiences moderating online spaces with millions of unique monthly visitors and managing online communities with thousands of daily active players, the paper explores three critical experiences of moderating online discussion in a gaming group revitalised online throughout the COVID-19 crisis: a new member whose English language skills were gained through listening to American rap music; the discovery that a member held views functionally incompatible with contemporary Western values on women’s rights; and the management of conflict arising when an older community with a different culture merged into our group.

Through understanding our approaches to these issues, insights can be found into why many online gaming communities do not challenge hate speech or other inappropriate behaviour. The paper presents successful strategies we adopted to overcome these challenges, and acknowledges the lessons learnt when we failed. Is there a way to modernise approaches to internet content moderation that will allow moderators to challenge misinformation, mitigate radicalisation, and answer new waves of socio-political extremism online?

‘Playback (Only?) As Intended: Reflections on Research into the Music of Final Fantasy XIV’. North American Conference of Video Game Music 2021, virtual conference (July 2021)

How do the ludic and social aspects of play interact during research into multiplayer games, and how might this alter analyses of the game? This paper reflects on the specific difficulties of engaging with research into Massively Multiplayer Online Role-Playing Games (MMORPGs) and explores several different methods for research-play in a multiplayer environment, based on research undertaken between 2018 and 2020 into the music of the MMORPG Final Fantasy XIV: A Realm Reborn.

Music in Final Fantasy XIV: A Realm Reborn changes during collaborative play experiences, which alters the social experience of the game. As the research had a specifically multiplayer focus, questions surrounding methodology arose: how can the same play event be observed from multiple perspectives simultaneously? A variety of approaches were tried: utilisation of international gaming communities, participation in game-specific communities, “multi-boxing”, LAN-style multiplayer experiences and others. These yielded sufficient success to complete the research, but each method had implications for how the game music was received. Players of MMORPGs engage with the game in substantially different ways and are not always conscious of how this affects the musical experience of the game: in what ways could this research be considered ethnographic, or hermeneutical? Do these terms have specific value within the study of multiplayer games, where community involvement may be essential to understanding how players perceive and interpret the game-text?

‘Bridging the Gap? Obstacles to Higher Music Education in the UK’. Midwest Graduate Music Consortium 2021: Timely Conversations, University of Michigan (April 2021); Symposium on the Arts and Decolonisation, Royal Holloway University of London (February 2021)

Whilst indications of progress are present in the fight to minimise discrimination in university admissions in the UK, a 2020 quantitative analysis of gender demographics for Higher Music Education (HME) courses in the UK between 2014 and 2020 revealed a significant gender divide in some areas. ‘Traditional’ academic music degrees and degrees combining music and theatre (e.g., Musical Theatre) have predominantly female populations, whilst degrees combining music and technology and degrees covering popular music or popular music performance have predominantly male populations. These are indications of a continued and substantial gender gap which must be bridged in future access and inclusivity initiatives. 

To identify potential areas for impactful work, a qualitative survey of advertised entry requirements for undergraduate degrees matriculating in 2021 in music-related subjects at universities in the UK was conducted. This revealed five main types of entrance requirements for HME in the UK: academic qualifications; non-academic qualifications; artistic requirements demonstrated through creative portfolio, interview or audition; further study requirements, such as foundation years or additional study years; and other informal requirements.

By examining existing data surrounding these pathways, three key questions can be answered: what obstacles to access can be observed in this study? What difficulties do those working in the UK face as they seek to challenge these obstacles? What further data is required to maximise the impact of diversity initiatives?

‘Speak of the Devil: (Pseudo-)Player Voice In Video Game Marketing’. Ludomusicology 2020, virtual conference (April 2020)

Popular reception to video game trailers is often at its most polarised when marketing campaigns attempt to capitalise on the social and emotional power of player voice. Voice-led trailers for Anthem (Bioware, 2018), EVE Online (CCP Games, 2003), The Division (Ubisoft, 2016), Rainbow Six Siege (Ubisoft, 2014) and others have been released as part of video game marketing campaigns. By using (pseudo-)player voice in video game trailers, marketing agencies attempt to demonstrate the social experience of their multiplayer games and perhaps even authenticate their trailer as being representative of real play.

Whilst some designers do manage to integrate idealised player speech cohesively into video game trailers, many fall flat: player reception to voice-led video trailers is often muted at best. In some instances, player reception has been completely contrary, with reimagined remixes offering criticism of both the marketing campaign and the development studios and publishers. Often, it seems that players find voice-led trailers to be “cringeworthy”, inaccurate or otherwise non-representative of their player experience. This demonstrates that players and game studios often seem to have different perceptions of player voice within video games.

This paper explores several trailers and their audience response, identifying what makes voice-led marketing material receive positive feedback from the intended audience. This allows us to suggest how players may envisage voice within their play experiences.

‘Halo: Transcription Evolved – Overcoming issues in Transcribing Modular Video Game Score’. Ludomusicology 2019, Leeds Beckett University (April 2019)

Halo: Combat Evolved utilises non-linear music to accompany narrative. Music is generated using randomised, algorithmic and interactive, real-time processes. However, transcriptions of the score typically portray a single simplistic variant of this infinitely variable music. As a result, these transcribed portrayals of the score fail to accurately present the working practices and musical design of the composer and sound designer Marty O’Donnell. Relationships between music and interactive content are specifically lost, as is the algorithmic nature of the perceived musical score within the game. To account for these issues, this score can be represented as a series of layers and loops in a modular score similar to that outlined by Medina-Gray.

However, even with an understanding of how we could represent the score, the transcription process for Halo: Combat Evolved does not become straightforward. Investigating the “black box” of the video game world through play poses a number of issues, both specific to Halo and general to all video games. Amongst the most obvious and frustrating: music cannot be fully isolated within the game engine without cracks or hacks, sound layers are mixed ‘automagically’ by the sound engine, and gameplay triggers for loops and layers are inconsistent. Elements of randomness combine with contradictory information about the music and sound of the game to cause greater difficulties in identifying loops and layers.

These difficulties are not unique to Halo: Combat Evolved. Through an exploration of the process of transcribing the well-recognised music of Halo, this paper demonstrates methods, such as waveform manipulation and analysis, gameplay routing and game engine manipulation, that can be applied when examining video game music through “black-box” recordings.

‘Madness Effect: Audiating Mental Instability’. Constructing the Moving Image, Huddersfield University (February 2019)

In Bioware's Mass Effect franchise, the effects of war cause characters to develop poor mental health. Characters begin to hallucinate visually and aurally, become obsessional or suicidal, experience nightmares and poor sleep, or express other symptoms of poor mental health. Some of these symptoms are the result of “indoctrination” by the antagonists of the game, whereby they mentally compromise key figures within the game’s narrative in order to further their agenda. In the final instalment of the original trilogy, the player character also begins to experience symptoms of poor mental health. This allows us to explore how episodes of mental instability are demarcated throughout the Mass Effect franchise, particularly through changes to the franchise’s established conventions of scoring and sound design, and to investigate how Bioware attempts to communicate the player character’s deteriorating mental health to the player.

‘Replacing Graphical User Interfaces in PAYDAY 2’. Ludomusicology 2018, HMT Leipzig (April 2018)

Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements common to GUIs in these genres, such as player radars, objective markers and ability timers. Instead, these commonplace GUI elements are replaced with auditory interfaces.

This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.

To successfully navigate the game world and find ludic success, players must develop an understanding of the game audio in what seems similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game following principles of Grimshaw and Ward, and allow us to establish a basis for examination of immersive audiovisual environments such as those found in virtual reality.
