ACADEMICS

I became interested in audiovisual media as a research topic during my undergraduate studies at Goldsmiths, University of London. In 2017, at Royal Holloway, I discovered Ludomusicology - the study of sound and music in games. I completed my Masters in 2017, writing my thesis on the relationships between voice and diegesis in video-game sound.

I currently conduct independent research, presenting at conferences on audiovisual interactive media, virtual and augmented realities, game design and other topics of interest. I am the volunteer European staff member for the Ludomusicology Discord Group.

Chapters, articles and other publications are uploaded to my website, for free, whenever possible.

I am available for discussion via email: stephen.tatlow@punkduckproductions.co.uk

The latest version was uploaded on 27th April 2019.

Halo: Transcription Evolved - Overcoming Difficulties in Transcribing Modular Video Game Score

Halo: Combat Evolved utilises non-linear music to accompany its narrative. Music is generated using randomised, algorithmic and interactive real-time processes. However, transcriptions of the score typically portray a single, simplistic variant of this infinitely variable music. As a result, these transcribed portrayals fail to accurately present the working practices and musical design of composer and sound designer Marty O’Donnell. In particular, relationships between music and interactive content are lost, as is the algorithmic nature of the musical score as perceived within the game. To address these issues, the music can be represented as a series of layers and loops in a modular score similar to that outlined by Medina-Gray.
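As a loose illustration of what a layered, looping modular score can look like in practice, the sketch below models cues as sets of looping stems with randomly chosen variants. The cue names, layer names and structure are hypothetical and are not drawn from Halo's actual audio engine.

```python
import random

# Hypothetical sketch of a modular score: each cue is a set of looping
# layers, and the audible mix is assembled at runtime from game state.
# All names here are illustrative, not taken from Halo: Combat Evolved.

CUE_LAYERS = {
    "exploration": ["ambient_pad", "percussion_loop"],
    "combat":      ["ambient_pad", "percussion_loop", "brass_stabs", "choir"],
}

# Some layers have interchangeable alternate loops, chosen at random.
ALTERNATE_LOOPS = {
    "percussion_loop": ["perc_variant_a", "perc_variant_b", "perc_variant_c"],
}

def select_playback(game_state):
    """Pick the active layers for the current game state, substituting a
    randomly chosen variant wherever alternate loops exist."""
    playback = []
    for layer in CUE_LAYERS[game_state]:
        variants = ALTERNATE_LOOPS.get(layer)
        playback.append(random.choice(variants) if variants else layer)
    return playback

print(select_playback("combat"))
# e.g. ['ambient_pad', 'perc_variant_c', 'brass_stabs', 'choir']
```

Even this toy model suggests why a single linear transcription cannot capture such a score: each playthrough can produce a different combination of layers and loop variants.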

 

However, even with an understanding of how we could represent the score, the transcription process for Halo: Combat Evolved is not straightforward. Investigating the “black box” of the video game world through play poses a number of issues, some specific to Halo and some general to all video games. Among the most obvious and frustrating: music cannot be fully isolated within the game engine without cracks or hacks; sound layers are mixed ‘automagically’ by the sound engine; and gameplay triggers for loops and layers are inconsistent. Elements of randomness combine with contradictory information about the music and sound of the game to create further difficulties in identifying loops and layers.


These difficulties are not unique to Halo: Combat Evolved. Through an exploration of the process of transcribing the well-recognised music of Halo, this paper demonstrates methods, such as waveform manipulation and analysis, gameplay routing and game engine manipulation, that can be applied when examining video game music through “black-box” recordings.
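As a minimal sketch of one way waveform analysis might be applied to a black-box recording, the example below estimates the length of a repeating loop in a gameplay capture by autocorrelating the waveform. The file name and the one-second minimum lag are hypothetical choices for illustration, not part of the paper's method.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

# Hypothetical sketch: estimate the length of a repeating musical loop
# in a black-box gameplay recording by autocorrelating the waveform.
# "gameplay_capture.wav" is an illustrative file name, not a real asset.

rate, samples = wavfile.read("gameplay_capture.wav")
if samples.ndim > 1:                  # fold stereo down to mono
    samples = samples.mean(axis=1)
samples = samples.astype(np.float64)
samples -= samples.mean()             # remove DC offset

# FFT-based autocorrelation; keep only non-negative lags.
corr = correlate(samples, samples, mode="full", method="fft")
corr = corr[len(samples) - 1:]

# Ignore lags shorter than one second so the zero-lag peak does not win.
min_lag = rate
loop_lag = min_lag + int(np.argmax(corr[min_lag:]))
print(f"Strongest repetition at {loop_lag / rate:.2f} seconds")
```

A peak in the autocorrelation only suggests a candidate loop length; confirming it still requires listening and comparison against gameplay context.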

 

Author's Note: These slides were first presented at Ludomusicology 2019, Leeds Beckett.
 

The latest version was uploaded on 8th May 2018.

Sound Knowledge: Replacing Graphical User Interfaces in PAYDAY 2

Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements found in GUIs common to these genres, such as player radars, objective markers and ability timers. Instead, these commonplace GUI elements are replaced with auditory interfaces.

This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.

To navigate the game world and find ludic success, players must develop an understanding of the game audio similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game, following the principles of Grimshaw and Ward, and allow us to establish a basis for examining immersive audiovisual environments such as those found in virtual reality.
 

 

Author's Note: These slides were first presented at Ludomusicology 2018, Leipzig. The version uploaded to this website is a "standalone" version of the presentation that is intended to provide the same information without the need for my presentation notes.
 

The latest version was uploaded on 30th October 2017.

In space, everyone wants to hear you scream: player voice as game audio in EVE Online and other video games.

Player voices pose issues when considered in the context of video game sound. Voice can be used as a control interface for games such as Guitar Hero and There Came An Echo. Voice can play a role in the authorship of player and character identity, and change how players experience game narrative. Voice is also used to facilitate coordination between players in multiplayer games and is increasingly common as part of the gameplay experience.

Yet player voice is often considered to be extradiegetic - external to the fantasy-reality of a virtual environment - despite the role of players as interactive co-authors of the game ‘text’. Given the importance of voice to play experiences, it is possible that voice can become part of the game ‘text’. This paper contextualises player voice and explores the complex relationship between player voice, developer and game ‘text’.

 

Author's Note: I began formulating this paper as part of a presentation at Ludomusicology 2017 whilst I was studying at Royal Holloway. It was well received, and my thesis supervisor urged me to consider developing my ideas further. It was eventually submitted as an assessment paper for my MMus Advanced Musical Studies postgraduate degree.