Activity

This page contains information about ongoing and historical academic activity, including conference talks, symposium participation, roundtables and interviews.

To request further details, copies of presentations or other materials, please use the contact details below:

email: stephen.tatlow@punkduckproductions.co.uk

email 2: stephen.tatlow.2016@live.rhul.ac.uk

discord: mediaboy#0001

Academic Conferences & Symposia
 

Dec 2021

The God Complex: Creating, Managing and Moderating Online Communities

Intimacy, Communities and Identities: Digital Platforms and Network Cultures, Chinese University of Hong Kong

Moderating online communities is a form of censorship which is not always well understood. Whilst the role of the moderator is to prohibit ‘unacceptable’ content – a process which is unashamedly non-democratic – the goal is often, to a degree, selfless: the moderator intends to protect the community to enable it to survive and/or grow in perpetuity.

Based on personal experiences moderating online spaces with millions of unique monthly visitors and managing online communities with thousands of daily active players, the paper explores three critical experiences of moderating online discussion in a gaming group revitalised online throughout the COVID-19 crisis: a new member whose English language skills were gained through listening to American rap music; the discovery that a member held views functionally incompatible with contemporary Western values on women’s rights; and the management of conflict arising when an older community with a different culture merged into our group.

Through understanding our approaches to these issues, insights can be found into why many online gaming communities do not challenge hate speech or other inappropriate behaviour. The paper presents successful strategies we adopted to overcome these challenges, and acknowledges the lessons learnt when we failed. Is there a way to modernise approaches to internet content moderation that will allow moderators to challenge misinformation, mitigate radicalisation, and answer new waves of socio-political extremism online?

June 2021

Playback (Only?) As Intended: Reflections on Research into the Music of Final Fantasy XIV: A Realm Reborn

North American Conference of Video Game Music 2021, Virtual Conference, USA

How do the ludic and social aspects of play interact during research into multiplayer games, and how might this alter analyses of the game? This paper reflects on the specific difficulties of engaging with research into Massively Multiplayer Online Role-Playing Games (MMORPGs) and explores several different methods for research-play in a multiplayer environment, based on research undertaken between 2018 and 2020 into the music of the MMORPG Final Fantasy XIV: A Realm Reborn. Music in Final Fantasy XIV: A Realm Reborn changes during collaborative play experiences, which alters the social experience of the game. As the research had a specifically multiplayer focus, questions surrounding methodology arose: how can the same play event be observed from multiple perspectives simultaneously? A variety of approaches were tried: utilisation of international gaming communities, participation in game-specific communities, “multi-boxing”, LAN-style multiplayer experiences and others. These approaches were sufficiently successful for the completion of the research, but each method had implications for how the game music was received. Players of MMORPGs engage with the game in substantially different ways and are not always conscious of how this affects the musical experience of the game: in what ways could this research be considered ethnographic, or considered hermeneutical? Do these terms have specific value within the study of multiplayer games, where community involvement may be essential to understanding how players perceive and interpret the game-text?

April 2021

Bridging the Gap? Obstacles to Higher Music Education in the UK

Midwest Graduate Music Consortium 2021: Timely Conversations, University of Michigan, USA

Feb 2021

Bridging the Gap? Obstacles to Higher Music Education in the UK

Symposium on the Arts and Decolonisation, Royal Holloway, University of London, UK

Whilst indications of progress are present in the fight to minimise discrimination in university admissions in the UK, a 2020 quantitative analysis of gender demographics for Higher Music Education (HME) courses in the UK between 2014 and 2020 revealed a significant gender divide in specific areas. Generic music degrees and degrees combining music and theatre (e.g., Musical Theatre) have predominantly female populations, whilst degrees combining music and technology and degrees focused on the study or performance of popular music have predominantly male populations. These are indications of a continued and substantial gender gap which must be bridged in future access and inclusivity initiatives.

To identify potential areas for impactful work, a qualitative survey of advertised entry requirements for undergraduate degrees matriculating in 2021 in music-related subjects at universities in the UK was conducted. This revealed five main types of entrance requirements for HME in the UK: academic qualifications; non-academic qualifications; artistic requirements demonstrated through creative portfolio, interview or audition; further study requirements, such as foundation years or additional study years; and evidence of musical engagement.

By examining existing data surrounding these pathways, three key questions can be answered: what obstacles to access can be observed in this study? What difficulties do those working in the UK face as they attempt to challenge these obstacles? What further data is required to maximise the impact of diversity initiatives?

May 2020

Music and Fictional Virtual Reality

Participation in a roundtable hosted at Royal Holloway, University of London, UK

The mid-to-late 2010s saw a renewed interest in virtual reality technologies with the release of a wide selection of consumer VR headsets and glasses, and the increased power of smartphones to provide augmented reality experiences. While research on sound and music in these technologies has started to gather pace, practice and research continue to grapple with significant creative and practical questions. In addition, such discussions have tended to be limited within disciplinary or professional silos. The roundtable presented here was an effort to begin conversations across a variety of fields of research and practice. We shared perspectives and exchanged views informed by different disciplinary traditions and experiences. We also sought to identify key questions and issues regarding music and sound in VR/AR. Three main themes are presented here: 1) Spaces and musical performances, 2) Realities and realism, and 3) Movement, orientation, and disorientation.

This roundtable is available through the Journal of Sound and Music in Games.

April 2020

Speak of the Devil: (pseudo-)player voice in video game marketing

Ludomusicology 2020, Online

Popular reception to video game trailers is often at its most polarised when marketing campaigns attempt to capitalise on the social and emotional power of player voice. Voice-led trailers for Anthem (Bioware, 2018), EVE Online (CCP Games, 2003), The Division (Ubisoft, 2016), Rainbow Six Siege (Ubisoft, 2014) and others have been released as part of video game marketing campaigns. By using (pseudo-)player voice in video game trailers, marketing agencies attempt to demonstrate the social experience of their multiplayer games and perhaps even authenticate their trailer as being representative of real play.

 

Whilst some designers do manage to integrate idealised player speech cohesively into video game trailers, many fall flat: player reception to voice-led video trailers is often muted at best. In some instances, player reception has been openly hostile, with reimagined remixes offering criticism of both the marketing campaign and the development studios and publishers. Often, it seems that players find voice-led trailers to be “cringeworthy”, inaccurate or otherwise non-representative of their player experience. This suggests that players and game studios often have different perceptions of player voice within video games.

 

This paper explores several trailers and their audience response, identifying what makes voice-led marketing material receive positive feedback from the intended audience. This allows us to suggest how players may envisage voice within their play experiences.

April 2019

Halo: Transcription Evolved - Overcoming Issues In Transcribing Modular Video Game Score

Ludomusicology 2019, Leeds Beckett, UK

Halo: Combat Evolved utilises non-linear music to accompany narrative. Music is generated using randomised, algorithmic and interactive, real-time processes. However, transcriptions of the score typically portray a single simplistic variant of this infinitely variable music. As a result, these transcribed portrayals of the score fail to accurately present the working practices and musical design of the composer and sound designer Marty O’Donnell. Relationships between music and interactive content are specifically lost, as is the algorithmic nature of the perceived musical score within the game. To account for these issues, this score can be represented as a series of layers and loops in a modular score similar to that outlined by Medina-Gray.

 

However, even with an understanding of how we could represent the score, the transcription process for Halo: Combat Evolved does not become straightforward. Investigating the “black box” of the video game world through play poses a number of issues, some specific to Halo and some general to all video games. Amongst the most obvious and frustrating issues: music cannot be fully isolated within the game engine without cracks or hacks, sound layers are mixed ‘automagically’ by the sound engine, and gameplay triggers for loops and layers are inconsistent. Elements of randomness combine with contradictory information about the music and sound of the game to cause greater difficulties in identifying loops and layers.


These difficulties are not unique to Halo: Combat Evolved. Through an exploration of the process of transcribing the well-recognised music of Halo, this paper demonstrates methods such as waveform manipulation and analysis, gameplay routing and game engine manipulation that can be applied when examining video game music through “black-box” recordings.

Feb 2019

Madness Effect: Audiating Mental Instability

Constructing the Moving Image, RMA Study Day, University of Huddersfield, UK

In Bioware's Mass Effect franchise, the effects of war cause characters to develop poor mental health. Characters begin to hallucinate visually and aurally, become obsessive or suicidal, experience nightmares and poor sleep, or express other symptoms of poor mental health. Some of these symptoms are the result of “indoctrination” by the antagonists of the game, whereby they mentally compromise key figures within the game’s narrative in order to further their agenda. In the final instalment of the original trilogy, the player character also begins to experience symptoms of poor mental health. This allows us to explore how episodes of mental instability are demarcated throughout the Mass Effect franchise, particularly through changes to the franchise’s established conventions of scoring and sound design, and investigate how Bioware attempt to communicate the player character’s deteriorating mental health to the player.

April 2018

Sound Knowledge: Replacing Graphical User Interfaces in PAYDAY 2

Ludomusicology 2018, HMT Leipzig, Germany

Payday 2 seems superficially similar to many other first-person shooters and stealth games. The Graphical User Interface (GUI) contains typical shooter indicators for health and ammunition alongside typical stealth-game indicators for suspicious and alerted enemies. However, Payday 2 also omits or limits a number of elements found in GUIs common to these genres, such as player radars, objective markers and ability timers. Instead, these commonplace GUI elements are replaced with auditory interfaces.

 

This paper deconstructs two levels from the co-operative first-person stealth-shooter Payday 2 to demonstrate how auditory elements can be used within interactive media to replace elements of user interface that are conventionally visual. It examines music, dialogue and sound to build an understanding of how players must interact with the audio of the game.

 

To successfully navigate the game world and find ludic success, players must develop an understanding of the game audio in what seems similar to the knowledge described by Bourgonjon as “video game literacy”. This may help to immerse players more completely within the game following principles of Grimshaw and Ward, and allow us to establish a basis for examination of immersive audiovisual environments such as those found in virtual reality.
