ACADEMICS

Stephen is a full-time PhD student at Royal Holloway, University of London, where his research examines sound and music in virtual reality. He puts the user first, taking an experiential approach to analysing sound design in virtual reality.

Stephen presents frequently at conferences on audiovisual interactive media, virtual and augmented realities, game design and related topics. He is also the volunteer European staff member on the Ludomusicology Discord Group, which focuses on the study of sound and music in play.

Chapters, articles and journal submissions are available for download from this website when possible.

Available for discussion:

e-mail: stephen.tatlow.2016@live.rhul.ac.uk

discord: mediaboy#0001

Everyone in Space Wants to Hear You Scream: Towards a Framework for Understanding Player Voice in Virtual Worlds
Journal of Sound and Music in Games (2020) 1 (3): 15–34.

Existing examinations of player voice in the context of game sound remain inconclusive. Because player voice occupies a liminal position between reality and virtuality, some academics see it as a sonic violation of the game space. Voice can convey information about identity, which may conflict with our understanding of the avatars within the game world. Voice can facilitate social communication, which may remind us of the physical world outside the virtuality. Mediations of voice into the virtual world may introduce obstacles or inflections that interfere with our enjoyment of the virtual space. Alongside these concerns, however, we can also find virtual worlds that prioritize and privilege player voice. Player voice can become part of character identity. Gameworlds can encourage us to communicate ludically, without disrupting immersion. Interruptions and disruptions can be limited by players.

Amongst others, the virtual world of the long-running MMORPG EVE Online demonstrates how voice can coexist with immersion. Marketing materials for the game now place player voice at the center of consumer focus. Drawing on an interview with one of the videographers who placed player voice at the center of his fan videos, the article uses EVE Online as a case study for the integration of player voice into virtual worlds.

By examining virtual worlds and the role of voice within them, this article develops a framework for understanding player voice in the context of game sound. This allows us to recognize how player voice, an often overlooked aspect of game sound, can function within virtual worlds.

Speak of the Devil: Player voice in video game marketing
Ludomusicology 2020, virtual conference

Popular reception to video game trailers is often at its most polarised when marketing campaigns attempt to capitalise on the social and emotional power of player voice. Voice-led trailers for Anthem (Bioware, 2018), EVE Online (CCP Games, 2003), The Division (Ubisoft, 2016), Rainbow Six Siege (Ubisoft, 2014) and others have been released as part of video game marketing campaigns. By using (pseudo-)player voice in video game trailers, marketing agencies attempt to demonstrate the social experience of their multiplayer games and perhaps even authenticate their trailer as being representative of real play.

Whilst some designers do manage to integrate idealised player speech cohesively into video game trailers, many fall flat: player reception to voice-led trailers is often muted at best. In some instances, reception has run completely contrary to intentions, with reimagined remixes criticising both the marketing campaign and the development studios and publishers behind it. Often, players find voice-led trailers "cringeworthy", inaccurate or otherwise unrepresentative of their play experience. This suggests that players and game studios often have different perceptions of player voice within video games.

This paper explores several trailers and their audience response, identifying what makes voice-led marketing material receive positive feedback from the intended audience. This allows us to suggest how players may envisage voice within their play experiences.

The latest version was uploaded on 27th April 2019

Halo: Transcription Evolved - Overcoming Difficulties in Transcribing Modular Video Game Score
Ludomusicology 2019, Leeds Beckett University

Halo: Combat Evolved utilises non-linear music to accompany its narrative. Music is generated using randomised, algorithmic and interactive real-time processes. However, transcriptions of the score typically portray a single, simplistic variant of this infinitely variable music. As a result, these transcribed portrayals fail to accurately present the working practices and musical design of composer and sound designer Marty O'Donnell. In particular, relationships between music and interactive content are lost, as is the algorithmic nature of the perceived musical score within the game. To account for these issues, the score can instead be represented as a series of layers and loops in a modular score similar to that outlined by Medina-Gray.

However, even with an understanding of how the score could be represented, the transcription process for Halo: Combat Evolved is not straightforward. Investigating the "black box" of the video game world through play poses a number of issues, some specific to Halo and others common to all video games. Amongst the most obvious and frustrating: music cannot be fully isolated within the game engine without cracks or hacks, sound layers are mixed ‘automagically’ by the sound engine, and gameplay triggers for loops and layers are inconsistent. Elements of randomness combine with contradictory information about the music and sound of the game to make identifying loops and layers more difficult still.

These difficulties are not unique to Halo: Combat Evolved. Through an exploration of the process of transcribing the well-recognised music of Halo, this paper demonstrates methods such as waveform manipulation and analysis, gameplay routing, and game engine manipulation that can be applied when examining video game music through "black-box" recordings.

Author's Note: These slides were first presented at Ludomusicology 2019, Leeds Beckett.

Madness Effect: Audiating Mental Instability
Constructing the Moving Image, RMA Study Day, University of Huddersfield, 2019

In Bioware's Mass Effect franchise, the effects of war cause characters to develop poor mental health. Characters hallucinate visually and aurally, become obsessional or suicidal, experience nightmares and poor sleep, or display other symptoms of poor mental health. Some of these symptoms are the result of “indoctrination” by the game’s antagonists, who mentally compromise key figures within the narrative in order to further their agenda. In the final installment of the original trilogy, the player character also begins to experience symptoms of poor mental health. This allows us to explore how episodes of mental instability are demarcated throughout the Mass Effect franchise, particularly through changes to the franchise’s established conventions of scoring and sound design, and to investigate how Bioware attempt to communicate the player character’s deteriorating mental health to the player.

The latest version was uploaded on 8th May 2018

Sound Knowledge: Replacing Graphical User Interfaces in PAYDAY 2
Ludomusicology, HMT Leipzig, 2018

Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements common to GUIs in these genres, such as player radars, objective markers and ability timers. Instead, these commonplace GUI elements are replaced with auditory interfaces.

This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.

To navigate the game world and find ludic success, players must develop an understanding of the game audio similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game, following the principles of Grimshaw and Ward, and allows us to establish a basis for examining immersive audiovisual environments such as those found in virtual reality.

Author's Note: These slides were first presented at Ludomusicology 2018, Leipzig. The version uploaded to this website is a "standalone" version of the presentation that is intended to provide the same information without the need for my presentation notes.