Undergraduate Dissertation – The Affect of Sound Visualisations on deaf gamers’ experience of mood in videogames



This research explores an alternative to captions as an accessibility provision for deaf gamers – sound visualisations – and how these can convey mood. The project has been informed by ideas from Richard A. van Tol, and by the theories of Donald Norman, Michel Chion and Kristine Jørgensen. Through self-completion questionnaires and user testing, the aim is to put these theories into practice and realise an approach to designing sound visualisations that benefits all gamers, regardless of ability.





Definition of Terms 

Deaf – BSL (British Sign Language) is the first language; English is most likely a second language.
deaf and oral – moderately to profoundly deaf and uses English as their first language (as opposed to BSL as their first language).
Partially deaf – deaf in one ear.
Hard of Hearing – mild or moderate hearing loss.
Deafened – have become deaf either gradually or suddenly.
Subtitles – words across the bottom of the screen for spoken dialogue only.
Captions – words across the bottom of the screen for spoken dialogue and sound effects.




As gaming becomes more mainstream, developers need to consider the needs of a very diverse audience, especially those with special access needs. According to the Game Accessibility Special Interest Group, as much as 10-20 percent of the population of most countries is classed as ‘disabled’ (GA SIG, 2004). From a gaming perspective, an estimated 10% of the gaming population is thought to be disabled (Ablegamers, 2009), while a survey of casual gamers commissioned by PopCap found that more than 20% classed themselves as disabled (Ablegamers, 2009). In the UK alone, an estimated 16% of people are classed as deaf or hard of hearing (Ladd, 2007: 32), a figure which includes those who lose hearing naturally in later life. Therefore it is not only ethical, but also financially desirable, to create games that are accessible to a wide audience.

However, current research in games has predominantly focused on ‘visible’ disabilities such as visual impairments (Buxton, 1994) and mobility issues. When it comes to deaf gamers, accessible provision can be inconsistent, and where it is provided it usually takes the form of subtitles and/or captions. Putting text on screen is seen as the only option available for deaf gamers. While captions can perform a usability function fairly well in most cases, sound in games plays a much greater role than enhancing usability alone: it also creates mood and evokes an emotional response in the player.

This project asks whether captions are the best option for representing sound’s ability to portray mood, or whether we should consider other alternatives. In his article ‘The Sound Alternative’, Richard van Tol proposes sound visualisations as one of many alternative approaches to captions. This research builds on this idea and asks whether it is captions or sound visualisations which have the greater capacity to affect the player. On a wider scale, by exploring these issues, this research aims to promote deaf awareness and social inclusion amongst developers and gamers.

This research project will aim to answer the following questions:

  • What alternatives are there to textual signifiers as an aid to deaf gamers’ awareness in videogames?
  • How effective are textual and non-textual signifiers at suggesting mood in videogames?
  • To what extent are these signifiers seen as intrusive by hearing gamers?


Literature Review 

To frame this research, firstly this review will examine the role of sound, both in terms of usability, and aesthetic functions. Kristine Jørgensen’s article entitled ‘Left in the Dark: Playing Computer Games with the Sound Turned Off’ (Jørgensen, 2008) provides a valuable insight into how important sound is to games, while Michel Chion’s book ‘Audio-Vision: Sound on Screen’ (Chion, 1994) presents a comprehensive understanding of sound in film, particularly the relationship between sound and image.

Secondly, this review will look at captions and consider how we might design emotionally engaging visual alternatives using the theory of Affect as discussed by Donald Norman in his book ‘Emotional Design: Why we love (or hate) everyday things’ (Norman, 2004). This will be complemented by Clarisse Sieckenius de Souza’s theories on semiotic engineering. (de Souza, 2005)

Finally, we will look at accessibility in games and the argument for universally accessible games (Grammenos and Savidis, 2006).

The Importance of Sound in Games 

In her article ‘Left in the Dark: Playing Computer Games with the Sound Turned Off’ (2008), Kristine Jørgensen argues that sound plays a vital role in games beyond usability. While it would be irresponsible to assume that deaf people simply cannot hear sound at all – in fact, they vary greatly in terms of their individual level of hearing – Jørgensen’s article is important because it highlights the impact that diminished access to sound can have on the player’s experience of the game. The removal of sound, argues Jørgensen, affects three key attributes of the game-playing experience: the player’s sense of control, their sense of presence in the gameworld, and their attentive focus (Jørgensen, 2008).

Jørgensen conducted a study in which gamers were asked to play a sample of the real-time strategy game Warcraft III (Blizzard, 2002) or the stealth-’em-up Hitman Contracts (Io Interactive, 2004), first with sound, then without. Players’ immediate reaction to the removal of sound was a feeling of loss of control. This was, notes Jørgensen, especially prevalent in Warcraft III, since the player oversees many units, which can also act independently at times. As a result, with sound removed, the player was unaware of events happening off screen, such as units being attacked (Jørgensen, 2008: 166). Even on-screen events, such as skirmishing units, were difficult for players to evaluate without sound since there were so many units on screen (2008: 169). These findings are supported by Chion, who says that the eye is much slower at processing information than the ear (Chion, 1994), and that sound helps to ‘spot’ an image, allowing us to perceive rapid movements (Chion, 1994).

Sound also supports the player’s spatial awareness of the gameworld, which is necessary to evaluate the current state of the game (2008: 168). In situations where a sound’s source is off screen – what Michel Chion calls ‘acousmatic sounds’ (Chion, 1994) – the player is disadvantaged without sound because they cannot orient themselves to the relevant auditory cues (Jørgensen, 2008: 168). In Hitman Contracts, for example, Jørgensen found that players’ anxiety increased because they could not hear the footsteps of approaching enemies or doors opening behind the player character (Jørgensen, 2008: 167, 172).

In addition to providing usability feedback, sound also gives the player a sense of presence in the gameworld (Jørgensen, 2008: 171). Sound expands to fill a space, giving it definition (Stockburger, 2003; Bridgett, 2005) and even the character of a particular era (Bridgett, 2005), and this helps players become emotionally engaged in the virtual environment. Without sound, the artificiality of the game resurfaces: players lose their emotional attachment to the graphics on screen, and consequences in the game become meaningless (Jørgensen, 2008: 172). The player’s immersion or ‘flow’ (Van Gorp, 2008) is interrupted.

Lastly, Jørgensen suggests that removing sound may actually remove distractions from the game. (Jørgensen, 2008: 173) Without sound, players were forced to rely on only the visual channel for information. This caused them to become more observant and systematic in their playing style and less likely to be interrupted by minor audio cues. (Jørgensen, 2008)

While the lack of sound meant that participants’ reaction times increased, since they had to see events in order to respond to them (Jørgensen, 2008: 170), Jørgensen suggests that their attentive focus remained the same (Jørgensen, 2008: 174). This is a little surprising considering that it was sound which was first used to drown out external distractions in the arcades (Collins, 2008). However, both Jørgensen and Chion suggest that visual perception naturally increases to substitute for the lack of auditory perception (Jørgensen, 2008: 174; Chion, 1994). Chion even suggests that the visual centres in deaf people who sign may be more highly developed than those of hearing people (Chion, 1994).

The Problem with Captions/An Alternative to Captions 

The reliance on audio as an information channel can present major problems for deaf gamers where a visual alternative is not provided. As has been discussed, not having access to crucial information such as the approach of an enemy or the dialogue in a pre-mission briefing can leave gamers disoriented and frustrated (Jørgensen, 2008). One solution to this problem is to complement sound in games with closed captions. Closed captions can be toggled on and off and provide text on screen for both dialogue and sound effects (Kimball, 2005).
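As a rough sketch of how such a system might be structured, the logic can be reduced to a lookup from sound events to caption text, plus a global toggle. The event names, caption text and API below are hypothetical illustrations, not taken from any real engine:

```python
# Minimal sketch of a closed-caption lookup. Event names and captions
# here are hypothetical, for illustration only.

CAPTIONS = {
    # Sound effects are conventionally bracketed to distinguish them
    # from spoken dialogue.
    "door_open":    "[door creaks open]",
    "footsteps":    "[footsteps approaching]",
    "guard_speech": "Guard: Who's there?",
}

def caption_for(event, enabled=True):
    """Return caption text for a sound event, or None when captions are
    toggled off or the event has no caption defined."""
    if not enabled:
        return None  # closed captions can be switched off entirely
    return CAPTIONS.get(event)

print(caption_for("door_open"))                 # [door creaks open]
print(caption_for("door_open", enabled=False))  # None
```

Even this toy version makes the usability/mood distinction visible: the lookup tells the player *that* a door creaked, but nothing about how loud, close, or menacing the creak was.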

Half-Life 2 by Valve Corporation (Valve, 2004) is an exemplary model of captioning in games. Following the release of the original Half-Life (Valve, 1998), the problems experienced by many deaf gamers due to uncaptioned cutscenes were brought to Valve’s attention. The developers listened, providing the script for that game and building a closed-caption system directly into the sequel, Half-Life 2 (Bierre et al., 2005). The closed-caption system required very little additional work and was well received by gamers in the deaf community (Bierre et al., 2005).

Doom 3, another first-person shooter, was released around the same time as Half-Life 2 but did not feature captions (Bierre et al., 2005). The modding tools provided by the developers allowed an enthusiastic group, led by game developer Reid Kimball, to create a closed-caption system for the game (Bierre et al., 2005). In his article ‘Games Need Closed Captioning’, Kimball urged developers to include captions in their games to reach as large a market as possible and to allow deaf gamers to experience games fully and participate in game-related discussions. He also highlighted captions as beneficial for hearing gamers, allowing them to follow dialogue in noisy situations, and for non-native speakers who may have difficulty with the game’s default language (Kimball, 2006).

The high profile of these two cases has meant that many modern games now include closed captions. For example, recent releases such as BioShock 2 and Mirror’s Edge include subtitles, although it should be noted that these are not enabled by default and some cutscenes are not subtitled (deafgamers, 2010). However, while captions are a vital improvement in accessibility for deaf gamers, they do not provide the full game experience which Kimball suggests. Of all the functions of audio previously discussed, captions enable deaf gamers to play the game – they perform a usability function – but they do not convey the emotional resonance normally provided by audio (van Tol, 2006). Furthermore, captions themselves may remain inaccessible to members of the deaf community who use sign language, since a spoken language like English is not their first language (van Tol, 2006). Games are currently localised for different countries; however, localising text for sign-language users is not possible without loss of meaning, as Rachelle Hole testifies:

Similar to a silhouette, the texts in front of me were a manifestation, a reproduction of the visual and visceral experience, but it appeared featureless and lacking the important nuances of the performed texts. (Hole, 2007: 703)

In his article ‘The Sound Alternative’ (van Tol, 2006), Richard van Tol proposes that we consider other visual alternatives. Sound visualisations in particular, he suggests, can convey emotional meaning which would normally be provided by sound. (van Tol, 2006) Donald Norman’s work on emotional design is invaluable in this instance. By designing sound visualisations with his three levels of affect in mind – the visceral, behavioural and reflective – we can encourage an emotional response in gamers, much like that provided by sound.

Affect is a judgement system; it tells us if things are good or bad, safe or dangerous. It’s an instinctive gut reaction to sensory input, an evolutionary self-preservation system (Norman, 2004: 11-13). When playing a game or watching a film makes us experience joy or sadness, that’s the affective system talking. Emotions are the manifestation of the judgements made by the affective system (Norman, 2004: 11-13).

These responses occur on three levels, identified by Norman. The first, the visceral level, is the automatic, pre-conscious response to sensory input. This is where first impressions count: look, touch and sound are all important here (Norman, 2004). The affective response at this level generates positive or negative affect – in other words, positive and negative emotions (Norman, 2004). Norman suggests that we all share prewired affective responses to certain input as part of our survival mechanisms. Norman (2004) neglects the role social conditioning plays in our individual development, but as involuntary, automatic responses, there is some truth in his list of positive and negative stimuli.

Norman recognises that we do not all share the same tastes, just as some people willingly subject themselves to horror games while others do not. He explains this by suggesting that, as humans, we have the capacity to overcome our biological predispositions and learn to enjoy otherwise affectively negative experiences (Norman, 2004). Reasons for this behaviour, Norman suggests, include the addictive pleasure experienced during adrenaline-fuelled pursuits, and a trial-by-fire ethos: seeking the peer recognition that comes with conquering a fearful experience (Norman, 2004: 24). This explanation only accounts for intensely negative situations; it does not explain why people choose to override their affective responses outside these examples.

Most games appeal to the visceral level, at least initially. With attractive visuals, big explosions and rousing soundtracks worthy of any Hollywood action film, games are an engaging feast of sensory input. Both positive and negative affect can be used to attract potential gamers: attractive cover art or a disturbing trailer can each draw in a particular audience. The two are not mutually exclusive; during the game, players can experience both positive and negative affect thanks to the developer’s choices at the visceral level. For example, the visual appearance of the game can be manipulated through texture and lighting choices to encourage the happy, playful freedom of LittleBigPlanet (Sony, 2008), or the dark, moody tension of the Splinter Cell games.

Sound, then, can be altered to affect the player’s emotional state. Simply switching from pleasant, relaxing background music to sharp, ringing alarms shifts the focus from positive to negative affect (Norman, 2004: 27-28). Done right, this can be beneficial to the player depending on their current goal: positive affect encourages creative thought, making it easier to find alternative solutions to problems, while negative affect promotes focus, removing distractions, but at the risk of blinding the player to obvious solutions (Norman, 2004: 27-28). This is a guideline for game usability in general, since punishing the player repeatedly will encourage negative emotions, causing them to leave the game when they cannot see the solution. Therefore, an accessibility system based around emotion should strike a fine balance between usability and affect.

The second level of affect identified by Norman is the behavioural level. Here, the experience using the product matters more than its outward appearance. (Norman, 2004: 69-70) This is where good design promotes the ‘pick up and play’ nature of games. To do this, behavioural design needs to address four key areas: function, understanding, usability, and physical feel.

Function refers to whether a product does what the user expects it to do. The intention behind the sound visualisation system was that it should visualise sounds, performing the same role as closed captions currently do, but with the additional aim of encoding emotional meaning in the visualisation. Norman advises that determining the needs of the user can be problematic when trying to create a product where no similar product exists. This is because users cannot communicate their needs without the reflective experience to draw on. (Norman, 2004: 70)

This was certainly the case with the sound visualisations, since the only existing references were the visuals used in The Sims games (Maxis, 2000-10) and the communication-based symbols used in games like Phantasy Star Online (Sonic Team, 2000). So, when asked for their thoughts on sound visualisations, respondents tended to suggest examples of enemy-fire direction and health-damage systems (red frosting and arrows at the edges of the screen, for example). The solution to this problem, Norman suggests, is intensive study of people’s everyday use of products to identify possible improvements. Given the varied geographical locations of respondents, this was unfortunately not possible in this study.

In order to play the game, the player has to understand how the game works. Since the original developer is unlikely to be available to explain it to them, the player must rely on mental models. The designer, explains Norman, has their own mental model of how the system should be used, while the player forms their mental model based on the feedback the system provides to them, and possibly the documentation supplied with the product (Norman, 2004: 75-76). To use the software effectively, the designer’s model and the user’s model should match, and to achieve this the software itself acts as the “designer’s deputy” (de Souza, 2005).

Good feedback from the game is essential for this to work. The player must know that their commands have been received or that the game is expecting them to do something (Jørgensen, 2008). It must also be understandable feedback, not relying on ambiguous metaphors or clever use of multimedia (Norman, 2004: 81).

Usability, which has been touched on already, is how easy the product is to use. It can still be complex, like a typical fighting game with multi-button combos, but people shouldn’t have to spend years learning how to use it. This is also where universal usability matters, since the product will be used by a wide variety of people, each with individual abilities (Norman, 2004: 78). Should we exclude potential players because they don’t have the same level of hearing as everyone else? Are they somehow less entitled to play the game? We will revisit the argument for universal design later in this review.

The final key component of behavioural design is the physical feel of the product. In the world of digital imagery, we have lost the tactile sensation of physical objects. This gap is somewhat bridged by force feedback devices, but Norman suggests that we have lost some of the emotional pleasure associated with handling physical artefacts (Norman, 2004: 79-80). However, it may be possible to represent this through sound visualisations. Sound has a distinct ‘presence’: it can convey to the listener the size of a room and the surfaces which reflect the sound. Therefore, by visually representing the behaviour of sound – its reverberation, tempo, frequencies and so on – it may be possible to convey the physical essence of the sound to a deaf gamer.
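As a hypothetical sketch of this idea, measurable properties of a sound could be mapped onto visual parameters. The feature ranges and mappings below are illustrative assumptions made for this sketch, not an established design:

```python
import math

# Hypothetical mapping from sound properties to visual parameters for a
# sound visualisation. All ranges and mappings are assumptions.

def visualise(amplitude, frequency_hz, reverb_seconds):
    """Map sound features to visual parameters.

    amplitude      (0.0-1.0)   -> size of the visual element
    frequency_hz   (20-20000)  -> hue: low pitch near 0.0, high near 1.0
    reverb_seconds (decay)     -> how long the visual lingers on screen
    """
    size = 10 + amplitude * 90  # louder sounds draw larger shapes
    # Perceived pitch is roughly logarithmic, so map log-frequency
    # onto a 0-1 scale between 20 Hz and 20 kHz.
    hue = (math.log10(frequency_hz) - math.log10(20)) / (
        math.log10(20000) - math.log10(20))
    fade_ms = reverb_seconds * 1000  # reverberant sounds fade out slowly
    return {"size": size, "hue": min(max(hue, 0.0), 1.0), "fade_ms": fade_ms}

# A loud, low, echoing rumble versus a quiet, high, dry beep:
loud_low_rumble = visualise(amplitude=0.9, frequency_hz=60, reverb_seconds=2.0)
quiet_high_beep = visualise(amplitude=0.2, frequency_hz=4000, reverb_seconds=0.1)
```

The design choice here is that each visual dimension tracks one perceptual dimension of the sound, so a deaf gamer could in principle read loudness, pitch and spaciousness off the shape at a glance, rather than from a line of caption text.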

The behavioural level usually makes or breaks a game. Regardless of the aesthetic pleasures of the game, if the player cannot play it due to poor controls, lack of feedback, or incomprehensible meanings, they will think twice about investing further time in the game.

Norman’s third level of affect is the reflective level. This is the level of understanding and critical thinking, of conveying meaning. The reflective eye delights in and is satisfied by deeper meanings in design (2004: 126-127). The break in immersion that Jørgensen discussed comes when the reflective level spots something out of place, or fails to interpret meaning, raising nagging questions about the experience.

This is also the level of self-image. Norman talks about the affect of brands and how they are deliberately targeted at certain groups of people to make them feel a certain way. A positive past experience with a product is recalled at the reflective level, reaffirming our attachment to the brand. (Norman, 2004: 84-88) The products we use speak about who we are and how we want others to see us. Self-image is an important design consideration when considering the people that will end up using the product. As will be discussed in the next section, reinforcing a positive identity, particularly where deaf gamers are concerned, is an important first step in creating inclusive games, aimed at all gamers, regardless of perceived ability.


The Need for Social Inclusion in Games 

Grammenos and Savidis’ (2006) article, ‘Unified Design of Universally Accessible Games’, focuses on the importance of creating games that are accessible to people of all abilities, irrespective of their personal characteristics. They emphasise the need for the gaming industry to produce games that can be played concurrently by disabled and non-disabled gamers ‘either remotely or by sharing the same computer’ (2006: 1).

They go on to explain that there are currently two distinct ways of creating inclusive games. The first is to make inaccessible games accessible through third-party assistive technologies, such as ‘screen readers, mouse emulators, or special input devices’ (2006: 1). They stress that even when this sort of accessibility is achieved, it is usually through customised low-level adaptations or coincidence, ‘rather than the outcome of appropriate design considerations’ (2006: 1). The second is to create games specifically targeted at people with particular disabilities, such as audio-based games for blind people, and ‘single switch games for people with severe motor impairments on the upper limbs’ (2006: 1).

Grammenos and Savidis (2006) go on to discuss the drawbacks of developing games with particular disabilities in mind, namely the limited cost-effectiveness of targeting a small market, and the possible segregation of disabled gamers, who may become further marginalised and excluded within the gaming market and community. The authors discuss a new solution to this problem: a new game creation approach called ‘Universally Accessible Games’ (abbreviated to UA-Games), introduced by the Human-Computer Interaction Laboratory of ICS-FORTH (Grammenos and Savidis, 2006: 1). UA-Games seek to achieve a Design for All dynamic, ‘being proactively designed to optimally fit and dynamically adapt to different individual gamer characteristics without the need of further adjustments via additional developments’ (Grammenos and Savidis, 2006: 1). The authors also intend the concept to be used to create games that can be played concurrently by people of different abilities (whilst sharing the same computer), using different software and hardware platforms, and within different environments (Grammenos and Savidis, 2006: 1).

Grammenos and Savidis (2006) also discuss the importance of final game prototypes and testing in order to gauge the effectiveness of particular accessibility measures within the game. They note that ‘The usability and accessibility of these prototypes should definitely be evaluated with representative end-users. In this respect, a quick, handy and very effective informal evaluation method is thinking aloud’ (2006: 5). They describe this method as one or more evaluators observing a gamer interacting with the system, ‘asking for vocalisation of thoughts, opinions and feelings, in order to understand the gamers mental model and the system model. Conversations are usually recorded so that they can be analysed later on. Furthermore, to support the evaluation process, a list of indicative tasks is used, that prompts participants to explore the full game functionality available.’ (Grammenos and Savidis, 2006: 5). This level of participation and feedback allows developers to create a game that satisfies accessibility requirements for everyone.


Methodology Data Collection 

Since the aim of this project is to evaluate the player’s experiences using sound visualisations as an alternative to captions, the research strategy employed was primarily qualitative. This meant an inductive approach to theory and an interpretivist epistemological orientation which sees the social constructs created by human beings as meaningful to them, and in a state of continuous revision, rather than as fixed, external entities. (Bryman, 2008: 19-20) The particular social constructs encountered in this project are that of ‘deafness’ and ‘disability’, as constructed by both hearing and deaf gamers, and how these may differ in each case. Inevitably, however, there was also an element of quantitative research involved. (Bryman, 2008: 22-23)

The research sample for this project was determined by snowball sampling (Bryman, 2008: 184). Questionnaires were posted on a number of gaming forums, including those specifically aimed at deaf gamers, and passed to a small group of real-life contacts. Initial respondents were encouraged to refer their friends and family to the research to increase the sample. This approach could also be described as a convenience sample, since it took advantage of the ready availability of gamers browsing the online forums (Bryman, 2008: 183).

The sample generated is unlikely to be representative of the population – the population being ‘people who play computer games’ – but in this case the very concept of a population is problematic, since individuals’ status as ‘gamers’ constantly fluctuates (Bryman, 2008: 185). There was also no readily available sampling frame – ‘deaf people who play computer games’ – from which to select a sample (Bryman, 2008: 185), and, since it was felt that actively involving respondents in the generation of the sample would help to establish trust with the deaf community, snowball sampling was preferred in this case.

Initially, some general questions were asked on a number of gaming forums – AbleGamers, Deaf Gamers, DeafGamersOnline (server now offline), Edge Online, Interlopers.net, NowGamer, and rllmuk – to gather feedback on the focus of the project. There was a large response, particularly on the general gaming forums as opposed to the deaf-oriented forums. There could be many reasons for this: historically, deaf people have been studied by researchers in attempts to ‘help’ them, rather than being respected as complete and whole human beings with worthwhile contributions of their own (Ladd, 2007: 7-8), and they may therefore be suspicious of research in general, especially if the researcher-respondent relationship appears hierarchical, with the researcher in a position of authority (Bryman, 2008: 212; Hall and Hall, 2004: 119-120). With this in mind, every attempt was made to be as transparent and forthcoming as possible about the project in order to alleviate respondents’ concerns.

It may simply be that the topic failed to attract users’ attention. Some respondents suggested that the initial post title – “feedback wanted please” – was too vague. This was intentional, to avoid deterring people with negative preconceptions about deafness; however, a clearer wording may have been more appropriate (and was adopted for the preliminary questionnaire).

Overall, though, the initial response to the research area was very positive, with several respondents showing genuine excitement about the possibilities for representing sounds in games. A common suggestion was the use of force feedback devices, but unfortunately this was outside the scope of this project. It was more difficult, however, to discuss sound visualisations themselves, particularly in terms of how they could be used to create mood, since the majority of respondents’ experience with sound visualisations in games was with damage indicators and communication systems.

Following the discussion, a preliminary self-completion questionnaire, under the heading “Survey regarding captions and sound visualisations in games”, was posted on each of the forums (with the exception of DeafGamersOnline, which no longer appeared to be active). A couple of follow-up questions were then issued by email, and a further three, shorter, self-completion questionnaires were provided with the digital prototype.

Given that respondents were sourced online, it was impossible to predict their geographical location beforehand, therefore the convenience and inexpensiveness of self-completion questionnaires was useful since it meant that they could be distributed with minimal effort. This also made it more convenient for respondents who could complete the questionnaire in their own time. (Bryman, 2008: 217-218)

Another advantage of the self-completion questionnaires was the removal of bias induced by having a researcher present. Respondents were able to complete the questionnaire in anonymity, without outside influence (Bryman, 2008: 218), and with one-to-one communication (by email in this case) would be more likely to discuss sensitive issues (Hall and Hall, 2004: 121).

The online delivery also meant that all respondents were asked questions exactly the same way (Bryman, 2008: 218), and the possibility of unanswered questions was lowered, at least for those questions marked as compulsory, since the form would warn respondents about uncompleted questions. To avoid putting off respondents, the number of questions in the questionnaire was limited to fifteen and predominantly featured closed questions. (Bryman, 2008: 218-219)

The disadvantage of using self-completion questionnaires was the lack of opportunity to ask respondents to elaborate on their answers or to rephrase questions when respondents had difficulty. (Bryman, 2008: 218) This became apparent when some respondents did not agree with the categories used to describe different levels of hearing and felt obligated to categorise themselves in this way. However, forum discussions were continued throughout and, when this issue was raised, very fruitful discussions were held regarding the different points of view. It was stressed that the questionnaire was in no way intended to offend or categorise people and that the wording chosen was borne out of necessity.

Ideally, these problems would normally be addressed using follow-up interviews. However, it was felt that there was sufficient information provided by respondents in their forum responses, self-completion questionnaires, and responses to follow-up emails. Time was also a major consideration and, while it would have been possible to hold interviews via online messaging software, this was deemed unnecessary and unfeasible.

Ethical considerations are paramount when conducting research involving living human beings. As the previous example highlights, there existed the possibility of causing harm to respondents’ identities as deaf individuals. To avoid this, a conscious effort was made to be as open and honest as possible about the aims of the research and what would be involved (Bryman, 2008: 121-123), as well as how the study could be of mutual benefit to all concerned (AEA, 1994: Principles D4, in Hall and Hall, 2004: 81).

Respondents were reminded that participation was completely voluntary and that they could choose not to participate further at any time. After providing respondents with as much information as was deemed appropriate for them to make an informed decision, their signed agreement was obtained in the form of an email address (respondents who did not provide an email address were not contacted further regarding the project). (Bryman, 2008: 121-123) Covert tactics were deemed unnecessary and would have undermined the trust between all parties.

In addition to keeping respondents informed, respecting their confidentiality was also imperative. (Bryman, 2008) Respondents’ true identities were protected using pseudonyms in both the research notes and in the published thesis, and their answers were not associated with any demographic information, such as country of residence, which might allow them to be identified.

While respondents’ original questionnaires were stored online, these were held on securely maintained servers provided by the developers of the survey software. Under their terms and conditions, they will never sell or export this data without explicit permission, and all data will be irretrievably destroyed within two days’ notice. In accordance with the Data Protection Act (1998), notice will be given to destroy this information once it is no longer necessary to maintain it. Additionally, printed transcripts of respondents’ questionnaires used for note-taking were stored in a locked strongbox when not in use.

Designing the Questionnaires 

The online questionnaires were created using the Survey Gizmo software provided by Widgix LLC through their website http://www.surveygizmo.com. The advantage of this product over similar ones elsewhere was the lack of any restrictions on the number of questions and reports which could be generated. The reports also supported limited customisation (at the non-subscriber level) and included tables, bar charts, and pie charts, as well as Excel and .csv export.

The purpose of the preliminary questionnaire was to gather an overview of respondents’ gaming habits, attitudes to accessibility in games, and their own experiences with accessibility systems, such as captions. To ensure these areas were addressed, the questionnaire was broken down into three distinct sections, with some overlap in the areas they covered:

Section 1: Background Information 

This section was used to gather basic demographic information from respondents, including their sex, age and level of hearing. While the term “deaf” is used to encompass all levels of hearing disability in this project, and the sound visualisations themselves are aimed at both hearing and deaf gamers, it was useful to determine whether respondents were deaf signers or deaf lip-readers, both to compare their experiences using the visualisations and to consider any practical requirements they may have.

Section 2: Gaming Habits 

Respondents were asked how often they play games, which genres of games they tend to play, and which gaming consoles they owned. Given the difficulty of obtaining contacts within the deaf community, a minority group, no conditions were imposed on the level of gaming activity required for inclusion in the study. Furthermore, the research itself aims to be beneficial to all gamers, whether hardcore or casual.

Respondents’ preferred gaming genres were used to determine what style of prototype to develop (a first-person adventure) and whether there was any correlation between their preferred genres and their experience of accessibility support. Finally, their choice of gaming platforms was included to determine a suitable control scheme for the prototype, and again whether their experience of accessibility support was reflected in their choice of platform.

Section 3: Accessibility 

This section asked respondents whether they experienced any sound-related difficulties when playing computer games and to highlight the particular genres or games where this had occurred. This helped to reveal the extent of the issues faced by deaf (and in some cases, hearing) gamers when relying on sound as a feedback device. Respondents were also asked whether they had encountered subtitles and captions in games and, to gauge their attitude to accessibility systems, to explain why they would or would not use these features. They were also asked how much they felt the lack of this support would impact their playing experience, as well as their decision to purchase a game.

The questionnaire ended by reminding respondents that participation was voluntary and that their answers would be kept anonymous and confidential. It was at this point that respondents were invited to sign a declaration of consent to continue participating in the project.

User Testing 

To explore the research question, a computer game prototype was created using the Unity 3D game development software (Unity Technologies, 2010). The scenario involved a student’s attempt to sneak into a high school to retrieve a student file from the school office. The gameplay consisted of exploring several rooms in the school, avoiding patrolling janitors, and finding a key to gain access to the office. The game featured several sound effects – both ambient and interactive sounds – and these were represented in three versions of the game by three different visualisation systems: captions, symbolic visualisations, and combined visualisations.

The captioned version of the game showed the familiar white-text-on-a-black-box style found in closed-caption television programmes in the UK. Square brackets were used to denote sound effects, as opposed to dialogue, which did not have these (hints in the game were delivered as an internal monologue by the player character), and music was given the hash (#) notation.
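As an illustration, the caption notation described above could be expressed as a small formatting helper. This is a hedged sketch rather than the prototype’s actual code: the function name and caption kinds are invented, and the exact placement of the hash marks around music lines is an assumption.

```python
# Illustrative sketch (not the prototype's code) of the caption notation
# described above: dialogue shown plain, sound effects in square brackets,
# and music lines marked with the hash symbol.

def format_caption(kind, text):
    """Return the on-screen caption string for a line of game audio."""
    if kind == "dialogue":
        return text                      # internal monologue, shown as-is
    if kind == "effect":
        return "[" + text + "]"          # e.g. [footsteps approaching]
    if kind == "music":
        return "# " + text + " #"        # hash notation for music (assumed form)
    raise ValueError("unknown caption kind: " + kind)
```

For example, `format_caption("effect", "footsteps approaching")` would produce `[footsteps approaching]`.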

The symbolic visualisation version of the game replaced the captions with an animated graphic in the player’s Heads-Up Display (HUD) representing the sound being heard, and an animation in the game world emanating from the source of the sound. For example, musical notes floated from the stereo in the lounge, accompanied by similar notes rising in the HUD.

The combined visualisation system used a combination of text and sound balloons to represent sounds. There were no captions or HUD animations in this version; instead, each sound source had an animated sound balloon in its immediate vicinity. Inside this sound balloon was an onomatopoeic word describing the sound. For example, the dripping showerhead in the bathroom was accompanied by a rippling sound balloon containing the word ‘drip’.

The aim of the prototype was to present players with possible alternatives to captions, and to evaluate both their effectiveness as accessibility systems and their capacity to convey mood. The systems presented were rather crude, but they were distinctive enough to be differentiated from one another.

To evaluate the three systems, players were asked to complete a short questionnaire after each version. The questionnaires were limited to ten questions and were almost identical, except for a few system-specific questions, so that players’ feedback on each system could be directly compared. Predominantly open questions were used for these surveys, since qualitative data about the players’ experiences was the focus, and these were divided between two sections: Usability and Visual Representation of Sound.

Section 1: Usability 

This section was used to determine whether respondents had any problems playing the game that were caused by any of the three systems. Additionally, gamers were asked whether they felt the current system added to their enjoyment of the game or whether they found it disruptive to their playing experience. Specifically, these questions relate to the third facet of the main research question: whether gamers find sound visualisations intrusive.

Section 2: Visual Representation of Sound 

This section was concerned with how gamers interpreted the sound visualisations, both in terms of their appropriateness as usability functions and the particular mood they suggested. Additionally, respondents were asked to state their preference between the three systems, and to discuss whether their views on accessibility in games had been affected by the research project. These questions relate to the first and second facets of the main research question: whether there can be an affective alternative to captions. Finally, respondents were encouraged to suggest ways in which these systems could be improved. Their answers raise other avenues of study for future research in this area.


Data Analysis 

A total of 93 responses to the preliminary questionnaire were received, with 3 responses flagged as incomplete by the Survey Gizmo software. At the non-subscriber level, no information is available on partial completions, therefore these have been omitted from the analysis. Additionally, there were 42 abandoned questionnaires, described by Survey Gizmo as respondents who “loaded the survey but did not answer any questions” (Survey Gizmo, 2010). These were essentially non-submissions.

Of the 90 complete responses received, 28 elected to participate further in the research, and it is these responses which have been used in the analysis. Finally, when respondents were asked to play the prototype computer game, a total of 10 responses were received, with 2 of these only completing the questionnaire for the first (captioned) version of the game, and the remaining 8 completing all 3 questionnaires. The lower response rate could be attributed to the significant length of time between the preliminary questionnaire and the release of the prototype. While follow-up emails were issued to clarify concepts and maintain interest, releasing the prototype sooner may have proved more fruitful.

The largest response to the preliminary questionnaire was from the rllmuk forum with 30 responses, followed by the Edge Online forum with 21 responses. Some responses came via the Computer and Videogames forum and email referrals, indicating that the questionnaire had been circulated by respondents. The three questionnaires related to the prototype were only made available to the 28 respondents who elected to participate further in the study.


Data Analysis Method 

To make manipulating the data easier, it was coded using the Apple Numbers spreadsheet program. The closed questions of the preliminary questionnaire were coded using a ‘1 for yes, 2 for no’ scheme, with multiple-choice answers coded in ascending order. A coding manual was created to track the correlation between values and answers, which made it easier to sort and generate graphs from the data. While many of the answers were pre-coded by the Survey Gizmo software, the export function made the data unwieldy, therefore a hands-on approach to coding was preferred to ensure accuracy. Examples of the codings are shown below (fig). Respondents have been anonymously identified using the last three digits of their Survey Gizmo barcode.

Coding frame example 


Coding manual example 

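A minimal sketch of this coding scheme is given below. The question keys and answer labels are hypothetical stand-ins for illustration; the real coding frame and manual were maintained in Apple Numbers.

```python
# Minimal sketch of the coding scheme described above (hypothetical question
# keys and answer labels; the actual coding manual was kept in a spreadsheet).
CODING_MANUAL = {
    "plays_games":   {"Yes": 1, "No": 2},               # closed yes/no question
    "hearing_level": {"Hearing": 1, "Deaf (BSL)": 2,    # multiple choice,
                      "deaf and oral": 3,               # coded in ascending
                      "Partially deaf": 4,              # order of the options
                      "Hard of hearing": 5,
                      "Deafened": 6},
}

def code_response(response):
    """Translate one respondent's answers into numeric codes for sorting and graphing."""
    return {question: CODING_MANUAL[question][answer]
            for question, answer in response.items()}
```

For example, a respondent who plays games and is hard of hearing would be coded as `{"plays_games": 1, "hearing_level": 5}`.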


To prevent fragmentation, a thematic approach was used to code the qualitative questions of the preliminary and prototype questionnaires. (Bryman, 2008: 553) Respondents’ answers were evaluated for categories, which were then combined into larger themes.


Respondents were predominantly male, accounting for 93% of those taking part, while the two female respondents made up 7% of the sample. The age ranges were almost evenly split between 18 to 25 and 26 to 39. There was only one male respondent in the bracket of 40 years old or more.

Respondents were asked to describe their level of hearing as one of the following categories: hearing, Deaf (a sign language user, such as BSL), deaf and oral, partially deaf, hard of hearing, or deafened. Hearing respondents outnumbered deaf respondents, accounting for 20 of the 28 (71%), while the deaf respondents were split across those who relied on lip-reading skills (3), those who had mild to moderate hearing loss (3), one who was partially deaf, and one Deaf sign language user. No respondents chose the deafened category.



Throughout this project, the term ‘deaf’ has been used to address all levels of deafness for the sake of brevity.


Subtitles and captions in games 

The preliminary questionnaire revealed that, in this small sample at least, subtitles in games are very prevalent. What was more surprising was respondents’ lack of contact with captions.

Of the 28 respondents surveyed, fewer than a third had encountered captions in games. While all respondents had encountered subtitles before, this small sample raises concerns that caption coverage in games is still inadequate.

Why people use subtitles and captions 

The preliminary questionnaire provided a useful insight into the reasons gamers, both hearing and deaf, choose to use subtitles and/or captions in games. Rather than being a tool solely for deaf people, captions are also turned to by hearing gamers, for example when a game’s audio is ‘muddy’. In this situation, some gamers have a hard time picking out dialogue and will turn on subtitles to complement what hearing they have. Subtitles are also used by people whose native language does not match that of the game, either as a learning tool or to reinforce the delivery of information. While dialogue is often subtitled in games, full-motion video (FMV) sequences are notorious for not being subtitled; where these sequences include important plot details, deaf gamers can miss out on a lot.

Additionally, deaf gamers can miss in-game audio cues, such as the bonus pickups in Grand Theft Auto IV (Rockstar, 2008) which are signalled by cooing pigeons, or the ‘crackling’ sound in Team Fortress 2 (Valve, 2007) which signals that the Pyro class is successfully doing damage to another character.

Another reason gamers may wish to use subtitles in games is when playing games on mobile devices. Often it is not appropriate or even possible to listen to the sound while playing unless the gamer has headphones. In some cases, gamers prefer to listen to their own music while playing games and can make use of subtitles for this purpose.

Respondents were asked to highlight particular games or genres where they have encountered difficulty playing games due to their level of hearing. Of the 9 people who answered this question, most identified 3D adventures such as Assassin’s Creed (Ubisoft, 2007), and horror games as genres where they encountered problems.

When asked whether deaf accessible support was important in their decision to purchase a game, 6 respondents said it was somewhat important, 7 said it was very important, and only 1 said it was absolutely essential. It was not an issue for the remaining 14 respondents. When asked whether they would still purchase a game that did not feature accessible support, 22 responded yes, with only 6 saying they wouldn’t.

User Testing 


In the interests of anonymity and ethics, all names of the respondents have been changed.

The usability of captions 

To assess whether the systems demonstrated met gamers’ usability requirements, respondents were asked a few questions regarding their experience using each of the systems. Firstly, respondents were asked whether they had any trouble playing the game specifically related to the sound visualisation system.

In the case of the captions, although there was sound in the game, both hearing and deaf gamers found that relying on captions to navigate was problematic, as one respondent, “David”, illustrates:

It was difficult to judge the position of characters you were hiding from – at what point they had crossed the point you were hiding at (due to a lack of being able to judge location). (“David”)

While the captions did warn players when NPCs were approaching, they did not provide enough spatial information to determine which direction the NPCs were approaching from, or whether they had already passed. Gamers’ spatial awareness of the environment was therefore limited, and many relied on trial and error to avoid the NPCs, with one gamer, “Michael”, describing this disadvantage as “unfair”, suggesting that he felt it was an intentional design decision.

Modern games typically include some form of radar which avoids this problem; however, several respondents suggested ways in which the captions themselves could be modified to provide more spatial information. These included providing better descriptions stating whether the sound was “footsteps from afar” or “footsteps real close” (“George”, “Julie”), positioning the captions to the left- or right-hand side of the screen depending on the direction of the NPC (“Susan”), and altering the captions visually to indicate how loud the footstep sound was, and in doing so the proximity of the NPC. (“Charlie”)
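These suggestions could be sketched roughly as follows. This is illustrative only and not part of the prototype: the distance threshold and function names are invented, and a 2D cross product of the player’s facing vector and the direction to the NPC is used to pick the screen side.

```python
import math

# Illustrative sketch of the respondents' suggestions above (thresholds and
# names are invented): word the caption according to how close the footsteps
# are, and place it on the side of the screen the NPC is on.

def footstep_caption(player_pos, player_facing, npc_pos, near=5.0):
    """Return (caption_text, screen_side) for an approaching NPC's footsteps."""
    dx = npc_pos[0] - player_pos[0]
    dy = npc_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy)
    # 2D cross product of facing and direction-to-NPC: its sign tells us
    # whether the NPC lies to the player's left or right.
    cross = player_facing[0] * dy - player_facing[1] * dx
    side = "left" if cross > 0 else "right"
    text = "[footsteps real close]" if dist < near else "[footsteps from afar]"
    return text, side
```

For a player at the origin facing along the positive y-axis, an NPC ten units away on the positive x-axis would yield `("[footsteps from afar]", "right")`.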


Respondents were next asked if the captions made the game more enjoyable and/or if they found them intrusive. Overall, the deaf gamers reported that the captions either made the game more enjoyable since they were made aware of sounds in the environment (“Susan”, “Julie”), or that they made no difference because they were “traditional captions” that deaf gamers were used to. (“Brian”) They were also described as adding realism to the game (“Julie”).

The responses from hearing respondents were varied, ranging from “no less enjoyable” (“Jason”) and “a little distracting” (“David”) to enjoyable, but only for important sounds, with captions for other sounds considered annoying. (“Joe”) One respondent, “David”, suggested that he would turn captions off to increase his immersion in the game, stating that he did not think he would enjoy a game which relied solely on captions. Meanwhile, another respondent, “Jason”, suggested making the caption box transparent to minimise its intrusion. One gamer found that the combination of sound and captions helped him navigate better:

Despite having my volume high it [captions] helped to understand what the sound was. I found that it made it less frustrating and easier to understand the virtual environment. You could argue it even helped me navigate because the rooms looked the same and hearing that music reminded me of where I was. (“George”)

The “added value” between sound and image was picked up on by other hearing gamers, including “Joe” who suggested that the captions would not have meant anything on their own. Another respondent, “Charlie”, suggested that the dripping caption helped to bring the dripping sound to his attention, suggesting that there may be more actions to perform in the bathroom. This is evidence of the magnetization of sound by image that Chion described. (Chion, 1994: 69)


The affect of captions 

Based on their experience with the captions, respondents were asked to describe the mood they associated with the different sounds and the locations in which they were found. From the findings, it can be observed that most people described the sudden, sharp sounds, such as bells ringing and footsteps approaching, in terms of negative affect. Words like “ominous” and “tense” were commonly used. In contrast, sounds such as the stereo music were described as “relaxing.” These sounds are rounder and fuller, similar to what Norman would describe as the “soothing” sounds we are naturally attracted to. (Norman, 2008: 29)

One respondent suggested that the sounds of the lounge made it seem “frequented” (“Charlie”), no doubt related to Chion’s notion of sound temporalizing the image. Another, however, felt instead that it was ominous, owing to the absence of people. (“Susan”) “Michael” was the only respondent who did not associate any of the captions with mood.

Some respondents found the dripping shower head in the bathroom to be “calming” and “atmospheric” (“George”, “David”), while others found it annoying. (“Joe”) This would suggest some people found the regular pulse comforting, while others grew tense waiting for variation, as Chion suggests. (Chion, 1994: 15)

The sound effects also had an effect on players’ risk assessment. In contrast with the ominous footfalls of the janitors, some respondents found that the music from the cleaner’s headphones made him less threatening, since players assumed he could not hear their footsteps. (“Brian”, “Susan”)

Sound Effects – Reported mood  

Janitors’ Footsteps

  • Ominous, Scary, Tense

Stereo Music

  • Calming, Relaxed, Bland, Casual, ominous, homely

Coffee Pot Bubbling

  • Happy, familiar

Cleaner’s Headphones

  • Annoying, Energetic, Funky, Playful, Up Beat, calming, safe

Showerhead Dripping

  • Calming, Atmospheric, distracting, tense, annoying

Sinks/Showers Running

  • Scary, distracting, exuberant, nervous, playful

School Bell Ringing

  • Surprise, Scared, Startling, anticipation, alarming

Boiler Room Hum

  • Tense, creepy

Telephone Ringing

  • Surprise, confusing

Area – Reported mood



  • Quiet, Tense, Eerie, Empty, Casual, imposing, scary, cold

Teacher’s Lounge

  • Musical, Calm, Relaxing, Frequented, Casual


Bathroom

  • Safe, Playful, tense, exciting, wet

Boiler Room

  • Scary, Overwhelming, Tense, Foreign, anxious


The usability of symbolic visuals 

Overall, respondents found the symbolic visuals more helpful than captions, since they gave an indication of the distance to sound sources, making it easier for players to avoid the NPCs. (“George”) However, they still lacked any feedback about direction, which players found confusing, especially for the footsteps. (“David”) One respondent suggested showing footsteps on either side of the screen to indicate direction. (“Michael”)

Almost all the respondents found the visual system fun and engaging, with “Susan” saying she appreciated the more visual focus, and “Brian” commenting that he finds it easier to “process information when it’s a recognisable symbol rather than a block of text, [which] can be critical in games, especially twitch types”. He also said that he preferred the visuals since they were a representation of sound, rather than text. This can be understood to mean that he found the visuals more expressive since they tried to emulate properties of the sound itself, rather than simply providing a description of what the sound was.

“Michael” added that, while he personally did not find the visuals intrusive, some gamers might since the visuals were not confined to the bottom of the screen like subtitles normally are. This was certainly the case with a couple of gamers who found the overlapping visuals of the HUD distracting and difficult to ‘read’, in contrast to the pleasant visuals in the game environment. (“David”, “James”)

“Michael” suggested removing from the HUD those sounds that already had in-game visuals, since, he felt, duplicating non-essential sounds in the HUD was unnecessary. This makes a lot of sense and is beneficial where sounds would be difficult to convey as a symbol and where HUD screen space is limited. One such problem sound was the boiler humming in the basement. This was represented by a set of rotating gears, which many respondents found difficult to interpret, since it conveyed neither the pervasiveness of the sound nor the physical sense of a boiler. (“David”, “Jason”, “Susan”) More importantly, as “David” pointed out, gears are commonly used to indicate saving/loading in games, so such a visual is problematic.
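“Michael”’s filtering rule could be sketched as a simple predicate over the active sounds. This is illustrative only; the field names are hypothetical and not drawn from the prototype.

```python
# Sketch of the HUD filtering rule suggested above (hypothetical field names):
# keep a sound in the HUD only if it is gameplay-essential, or if it has no
# in-game visual of its own to represent it.

def hud_sounds(active_sounds):
    """Return the subset of currently active sounds that should appear in the HUD."""
    return [sound for sound in active_sounds
            if sound["essential"] or not sound["has_world_visual"]]
```

Under this rule, non-essential sounds that are already visualised in the game world (such as the stereo’s musical notes) would be dropped from the HUD, freeing its limited screen space.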

The affect of symbolic visuals 

There was not a radical change in the answers to questions regarding the mood of sounds and their areas in this version of the game, but this may be attributed to a lack of appropriate words to use for these sources. Some mood changes can be seen in sounds which played a minor role in the captioned version, such as the coffee pot and the boiler room. The coffee pot was now described as “amusing” and “energetic”, suggesting the liveliness you would expect from its bubbling, and, as “Brian” notes, this, combined with the floating musical notes from the stereo, really amplified the existing mood. “Jason” commented that in this version the lounge had a sense of calm mixed with “forbidden-ness.” This could be explained as a kind of hall-of-wonder effect as players entering the room are greeted by the rising musical notes.

The boiler room, too, underwent a mood change when using the visuals. “James” described the gears in the HUD as less ominous than the caption “Boiler Hums.” In contrast, “Susan” thought that the gears made the boiler room more oppressive, most likely due to the ‘cog in the machine’ connotations of gears.

Sound Effects – Reported mood


Janitors’ Footsteps

  • alerting, tense, ominous, scary

Stereo Music

  • entertaining, calming, composed, generic, intense, playful, fun

Coffee Pot Bubbling

  • relaxing, calming, amusing, energetic, happy

Cleaner’s Headphones

  • calming, playful, funky, reassuring

Showerhead Dripping

  • edgy, tense

Sinks/Showers Running

  • exciting, playful, distracting, tense

School Bell Ringing

  • tense, alarming, anticipation, surprising

Boiler Room Hum

  • nervous, busy, ominous

Telephone Ringing

  • confusing, anticipation, alarming

Area – Reported mood 


  • scary, solemn, quiet, casual, tense

Teacher’s Lounge

  • playful, jolly, casual, calming, reassuring


Bathroom

  • exciting, calm, playful, tense

Boiler Room

  • tense, foreign, ominous, oppressive

Overall, the response to the visual system was positive. More promising was a remark made by “Michael” who said: “Symbolic visuals looked like the game had simply been designed that way, rather than the more distracting/obtrusive subtitles.” This suggests that symbolic visuals are the right way to go if we are to design games aimed at all gamers, rather than special deaf-oriented games.

The usability of combined visuals 

Respondents found this system the most helpful, since it provided good feedback about NPCs’ locations at all times, but also the most intrusive. The sound balloons were felt to be more “imposing” (“Julie”), obscuring the player’s view of the game environment (“George”), and they lacked the energy of the symbolic visuals. (“Susan”)

Players also found that many of the sound descriptions used in this version did not accurately represent their sounds. The use of onomatopoeia was stretched beyond association, and so certain sounds, such as “knock” for the footsteps, made no sense to players. A better approach, suggested “David”, would be to use symbols within the balloons.

There were language problems caused by the use of onomatopoeia too, since meanings vary across cultures. “Jason”, as an American, pointed this out:

Maybe it’s because I’m American and the differences in onomatopoeias tend to be so major between cultures – just look up Japanese onomatopoeias and you’ll quickly learn that roosters don’t always say “cock-a-doodle-do”. (“Jason”)

The affect of combined visuals 

For the most part, respondents found this system annoying; however, there are signs that the moods associated with sounds in the previous systems started to shift in this version. In particular, the footsteps were described as “calm” and “relaxed” for the first time, and the music from the cleaner’s headphones was described as “aggressive”, which was closer to the music itself.


Sound Effects – Reported mood 

Janitors’ Footsteps

  • Impending, Tense, Distracting, Calm, Relaxed, Jarring, Weird

Stereo Music

  • Irritating, Jarring, Whimsical, Casual, Fruity, Gay, Bubbly, Flamboyant

Coffee Pot Bubbling

  • Funny, Whimsical, Amusing

Cleaner’s Headphones

  • Confusing, Energetic, Funky, Playful, Aggressive

Showerhead Dripping

  • Calming, Ominous, Amusing, Gentle, Peaceful

Sinks/Showers Running

  • Playful, Entertaining, Alarming, Amusing, Distracting, Gentle

School Bell Ringing

  • Alerting, Alarming, Startling, Anticipation

Boiler Room Hum

  • Tense, Ominous

Telephone Ringing

  • Edgy, Alarming, Anticipation, Surprising

Area – Reported mood 


  • Alarming, Tense, Eerie, Casual, Calm, Solemn

Teacher’s Lounge

  • Annoying, Whimsical, Casual, Relaxed, Out-of-Place


Bathroom

  • Exciting, Tense, Playful

Boiler Room

  • Tense, Oppressive, Ominous, Foreign

Overall, most players preferred either the captions or the symbolic visuals to this system, with some recommending a combination of the two. The spatial awareness limitations of captions were alleviated somewhat by the symbolic visuals, but there still remained usability problems without information on the direction sounds were coming from. This third system provided better directional awareness, but it was felt to be too overt and not as polished as the second system, as “Michael” explains:

Version 2 [symbolic visuals] seemed the most integrated, and seemed the most like an “ordinary” game. I didn’t see anything that screamed “assistive technology” game […] I could see a more finished version of this game being enjoyed by anyone without realising they were playing a game designed to visualise sound specifically. (“Michael”)


In conclusion, this project has explored an alternative to using captions in computer games, namely sound visualisations, and their capacity to suggest mood to deaf gamers. To ground the project, the role of sound in games was examined using Michel Chion’s writings in Audio-Vision: Sound on Screen (1994) and Kristine Jørgensen’s article ‘Left in the Dark: Playing Computer Games with the Sound Turned Off’ (2008). Additionally, Donald Norman’s theories on affect were discussed, along with how these might be used to design an affective sound visualisation system. With the deaf gamer in mind, Grammenos and Savidis’ article on universal accessibility (2006) was discussed with a view to developing a system that promotes a positive deaf identity.

Three visualisation systems were prototyped and playtested by members of several online forums, and their experiences were recorded using self-completion questionnaires. Captions were found to lack basic usability functions and were difficult for many players to assess in terms of mood. Symbolic visualisations were the most popular among respondents, followed by the directional aid found in the third system, combined visualisations. Of all three systems, symbolic visualisations were found to be the most affective, with nearly all respondents preferring this system over captions, albeit with some minor usability additions.

Further work 

Sound visualisations as a research area are ripe for exploration, and alternatives to captions even more so. Further research should be conducted to find the balance between an accessible system and an intrusive one. In the realm of visuals alone there are countless stylistic choices to explore; the systems demonstrated in this project are by no means exhaustive. Moving beyond sound and image, there are also force-feedback devices to consider, and how these might be used to benefit the deaf gamer. The ideal to bear in mind is a system which is inclusive for both hearing and deaf gamers, so that they can play together on a level playing field.

Word Count: 7,936




American Evaluation Association (AEA) (1994) ‘Guiding principles for Evaluators.’ In Hall, I. and Hall, D. (2004) Evaluation and Social Research: Introducing small-scale practice. USA: Palgrave MacMillan. pp. 81.

Bierre, K. et al (2005) Game Not Over: Accessibility Issues in Video Games. [online] Available at: http://www.igda.org/accessibility/HCII2005_GAC.pdf Date accessed: 10 May 2009.

Bridgett, R. (2005) Diegetic Devices. Develop Magazine, January 2005. [online] Available at: http://www3.telus.net/public/kbridget/diegetic.htm Date accessed: 12 May 2010.

Bryman, A. (2008) Social Research Methods. 3rd ed. Oxford: Oxford University Press.

Buxton, B. (1994) Auditory Interfaces: The Use of Non-Speech Audio at the Interface. [online] Available at: http://www.billbuxton.com/Audio.TOC.html Date accessed: 12 May 2009.

Chion, M. (1994) Audio-Vision: Sound on Screen. Translated from French, by C. Gorbman. New York: Columbia University Press. (Originally published in 1990).

Clark, J. (2009) Understanding Captions and Subtitles. [online] Available at: http://screenfont.ca/learn/ Date accessed: 14 May 2009.


Collins, K. (2008) Game Sound: An Introduction to the History, Theory and Practice of Video Game Music and Sound Design. USA: The MIT Press.

de Souza, C. S. (2005) The Semiotic Engineering of Human-Computer Interaction. USA: The MIT Press.

deafgamers.com (2010) Deafgamers Website. [online] Available at: http://www.deafgamers.com/ Date accessed: 12 May 2010.

Grammenos, D. and Savidis, A. (2006) Unified Design of Universally Accessible Games (Say What?) [online] Available at: http://www.gamasutra.com/features/20061207/grammenos_01.shtml Date accessed: 14 May 2009.

Hall, I. and Hall, D. (2004) Evaluation and Social Research: Introducing small-scale practice. USA: Palgrave Macmillan.

Hole, R. (2007) ‘Working between languages and cultures: Issues of Representation, Voice and Authority intensified.’ Qualitative Inquiry. 13(5), pp. 696-710.

International Game Developers Association (IGDA) (2004) Accessibility in Games: Motivations and Approaches. [online] Available at: http://www.igda.org/accessibility/IGDA_Accessibility_WhitePaper.pdf Date accessed: 10 May 2009.


Jørgensen, K. (2008) ‘Left in the Dark: Playing Computer Games with the Sound Turned Off.’ In: Collins, K. (ed.) From Pac-Man to Pop Music. Aldershot: Ashgate.

Kimball, R. (2005) Games need Captioning. [online] Available at: http://gamescc.rbkdesign.com/ Date accessed: 15 May 2009.

Kimball, R. (2005) Interview with Valve’s Yahn Bernier. [online] Available at: http://gamescc.rbkdesign.com/arti-views/valve_interview_cc.php Date accessed: 12 April 2010.

Kimball, R. (2006) Interview with Valve’s Marc Laidlaw. [online] Available at: http://gamescc.rbkdesign.com/arti-views/marc_laidlaw_cc.php Date accessed: 12 April 2010.

Ladd, P. (2007) Understanding Deaf Culture – In Search of Deafhood. UK: Multilingual Matters Ltd.

Norman, D. (2004) Emotional design: Why we love (or hate) everyday things. New York: Basic Books.

Sefton, M. (2004) Doom3 [cc]. [online] Available at: http://doom3cc.planetdoom.gamespy.com/about.htm Date accessed: 15 May 2009.

Stockburger, A. (2003) The Game Environment from an Auditive Perspective. [online] Available at: http://www.audiogames.net/pics/upload/gameenvironment.htm Date accessed: 12 May 2010.


van Gorp, T. (2008) Design for Emotion and Flow. [online] Available at: http://www.boxesandarrows.com/view/design-for-emotion Date accessed: 7 March 2010.

van Tol, R. (2006) The Sound Alternative. [online] Available at: http://www.game-accessibility.com/index.php?pagefile=soundalternative Date accessed: 12 May 2009.


Ludography

id Software (2004) Doom 3.

Maxis (2000-2010) Various Versions of The Sims.

Rockstar (2008) Grand Theft Auto IV.

Sonic Team (2000) Phantasy Star Online.

Sony Computer Entertainment (2008) Little Big Planet.

Ubisoft Montreal (2007) Assassin’s Creed.

Valve (1998) Half-Life.

Valve (2004) Half-Life 2.

Valve (2007) Team Fortress 2.


Copyright © 2010 Daniel Mclaughlan