How Audio Is Manipulated In Games

Overview

Just as we process sounds in our DAW, modern game engines and middleware allow us to change the properties of sounds at runtime (while the game is playing), based on what is happening at any given moment. We call this a dynamic mix, as the sounds and their properties change over time rather than being fixed throughout.

The fact that we can do this is pretty awesome for the following reasons:

  1. We can create more believable worlds by simulating the acoustics of the spaces portrayed on screen. This also helps players better identify where sounds are coming from and how close or far away they are (this is called spatialisation).

  2. We can achieve better clarity by dynamically adjusting the volume of different sounds to make sure that the player always hears the most important sounds, like dialogue.

  3. We can add processes to existing sounds to create immersive effects, for example, low-passing all of the sounds and playing a high ringing tone when a grenade goes off in close proximity (see the sketch just after this list). Applying the filter in real time avoids having to keep a bunch of pre-filtered sounds in the game’s files for this one specific context.

  4. We can randomise the volume and pitch of sounds to create variations and avoid repetition.
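To make point 3 concrete, here is a minimal sketch of how the grenade effect might be parameterised. This is illustrative Python, not any engine's real API, and all the numbers are invented; the function simply computes a filter cutoff and a tone volume that a sound designer would map onto the game's audio buses.

```python
def grenade_mix_params(distance_m: float, radius_m: float = 10.0):
    """Return (lowpass_cutoff_hz, ring_tone_volume) for a blast at
    distance_m from the listener, or None if it is out of range.
    All constants are made up for illustration."""
    if distance_m >= radius_m:
        return None  # too far away: leave the mix untouched

    # 1.0 right on top of the blast, 0.0 at the edge of the radius
    intensity = 1.0 - distance_m / radius_m

    # Closer blasts pull the cutoff lower, muffling the whole mix more
    cutoff_hz = 8000.0 - intensity * 7000.0  # sweeps from 8 kHz down to 1 kHz
    return cutoff_hz, intensity              # intensity doubles as tone volume

print(grenade_mix_params(2.0))   # close blast: heavy filtering, loud ring
print(grenade_mix_params(9.5))   # edge of the radius: barely noticeable
```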

These are just a few of the benefits of in-game processing, but let’s have a closer look at the actual practical applications.

First, it’s important to reiterate that games are all smoke and mirrors. To successfully pull off the illusion and make a soundscape believable, we have to make it reactive, and we can do that by using the following tools that are available to us.

The Tools (Quickstart Guide)

Automation

If you’ve designed sounds then you have probably changed the volume or the pitch through automation. It works in exactly the same way in game engines and middleware, letting you automate things like the volume of a sound based on in-game values such as the player's health. For example, the lower the player's health, the louder they hear a heartbeat sound effect. You can automate a bunch of things, so get creative with it!
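As a rough illustration of this kind of parameter-driven automation, here is the heartbeat example sketched in Python. In middleware like Wwise this mapping would typically be drawn as a curve on a game parameter (an RTPC) rather than written as code; the threshold and the linear ramp here are just assumptions for the example.

```python
def heartbeat_volume(health: float, threshold: float = 0.5) -> float:
    """Map player health (0.0-1.0) to heartbeat volume (0.0-1.0).
    Silent above the threshold, then ramping up as health falls,
    hitting full volume at zero health."""
    if health >= threshold:
        return 0.0
    return 1.0 - health / threshold

for hp in (1.0, 0.5, 0.25, 0.0):
    print(f"health={hp:.2f} -> heartbeat volume={heartbeat_volume(hp):.2f}")
```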

Reverb

We can apply reverb to sounds at runtime to mimic the acoustics of the space the player sees on screen. This can be done in multiple ways, one of them being through the use of reverb zones. These are essentially 3D volumes set up by the sound designers to match the spaces and rooms in the game. If the player is inside one of the volumes, the reverb for that zone is triggered and some or all of the sounds are processed by it.
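Under the hood, a reverb zone boils down to a containment test: is the listener inside this volume? Here is a simplified Python sketch using axis-aligned boxes; real zones can be arbitrary shapes, and the zone names and presets here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ReverbZone:
    preset: str          # reverb preset the zone triggers (invented names)
    min_corner: tuple    # (x, y, z) of one corner of the box
    max_corner: tuple    # (x, y, z) of the opposite corner

    def contains(self, pos) -> bool:
        return all(lo <= p <= hi
                   for p, lo, hi in zip(pos, self.min_corner, self.max_corner))

zones = [
    ReverbZone("cave",      (0, 0, 0),  (20, 10, 20)),
    ReverbZone("cathedral", (30, 0, 0), (80, 40, 60)),
]

def active_reverb(player_pos):
    """Return the preset of the first zone containing the player, if any."""
    for zone in zones:
        if zone.contains(player_pos):
            return zone.preset
    return None  # outside every zone: dry, or a default outdoor reverb

print(active_reverb((5, 2, 5)))    # -> cave
print(active_reverb((100, 0, 0)))  # -> None
```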

There are more complex and realistic ways that game engines can utilise reverb. One of them is casting rays to check where the player is located, how far away they are from the surfaces around them, such as the walls and ceiling, what the surrounding materials are, and so on. Based on that information, the reverb type and amount are applied in real time.
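A toy version of that idea: assume we already have the distances at which a handful of probe rays hit geometry (a real engine would get these from physics raycasts, and would also factor in the surface materials). From those distances we can derive rough reverb settings. All the constants here are made up for illustration.

```python
def estimate_reverb(ray_hit_distances_m):
    """Derive rough reverb settings from probe-ray hit distances (metres)."""
    avg = sum(ray_hit_distances_m) / len(ray_hit_distances_m)

    # Bigger average distance -> bigger perceived space -> longer tail
    decay_s = min(0.2 + avg * 0.15, 8.0)  # clamped to a sane maximum

    # Nearby surfaces produce stronger early reflections
    early_reflections = max(0.0, 1.0 - avg / 20.0)
    return decay_s, early_reflections

print(estimate_reverb([2, 3, 2.5, 4]))    # small room: short, reflective
print(estimate_reverb([25, 40, 30, 35]))  # huge hall: long decay
```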

Attenuation

Attenuation allows us to simulate the distance of sound sources from the listener by changing the volume and stereo spread, and adding low- or high-pass filters depending on the proximity of the source.

In simple terms: if the sound source is far away, its volume will be lower and it will sound more muffled than if it were happening right in front of you. When the sound is too far away, we won’t hear it at all.

It is common to set a minimum volume for some sounds, however, so that no matter how far away the sound source is, the player will always hear it faintly. This is very useful for important sounds such as game dialogue, when the player moves far away from the NPC but still needs to hear what they say.
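Putting the last few paragraphs together, a basic attenuation function might look something like this. It is a sketch with invented constants, using a simple linear rolloff (engines offer several curve shapes) plus the volume floor described above.

```python
def attenuate(distance_m, min_dist=1.0, max_dist=50.0, volume_floor=0.0):
    """Return (volume, lowpass_cutoff_hz) for a source at distance_m.
    Inside min_dist: full volume. Beyond max_dist: the floor (0.0 = silent).
    A non-zero volume_floor keeps vital sounds, like dialogue, audible."""
    if distance_m <= min_dist:
        t = 0.0
    elif distance_m >= max_dist:
        t = 1.0
    else:
        t = (distance_m - min_dist) / (max_dist - min_dist)  # 0..1 rolloff

    volume = max(1.0 - t, volume_floor)
    cutoff_hz = 20000.0 - t * 18000.0  # distant sounds get duller, down to 2 kHz
    return volume, cutoff_hz

print(attenuate(0.5))                     # right next to it: loud and bright
print(attenuate(60.0))                    # out of range: silent
print(attenuate(60.0, volume_floor=0.2))  # dialogue: faint but always audible
```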

Randomisation

Randomisation allows us to set maximum and minimum values for parameters like pitch and volume for a sound. Each time the sound plays, a random value between the minimum and maximum is selected, which helps make frequently heard sounds less repetitive and allows us to get more variations out of limited source material.

If, for example, we only have 5 footstep sounds, we could split those sounds between heel and toe (the first and second halves of each sound), and then play back one heel and one toe at random each time the sound is needed, randomising the pitch and volume of every individual sound, as in the sketch below.
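Here is a minimal Python sketch of that footstep system. The sample names and ranges are invented; five heel halves and five toe halves already give 25 combinations before pitch and volume randomisation even comes into play.

```python
import random

HEELS = ["heel_01", "heel_02", "heel_03", "heel_04", "heel_05"]
TOES  = ["toe_01",  "toe_02",  "toe_03",  "toe_04",  "toe_05"]

def next_footstep():
    """Pick a random heel/toe pair, each with randomised pitch and volume."""
    return [
        {
            "sample": random.choice(pool),
            "pitch":  random.uniform(0.95, 1.05),  # roughly +/- 5% pitch shift
            "volume": random.uniform(0.8, 1.0),
        }
        for pool in (HEELS, TOES)
    ]

print(next_footstep())  # a different-sounding step on nearly every call
```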

Obstruction & Occlusion

These two terms refer to how sounds change when travelling through or around objects and surfaces.

Obstruction is when something is obstructing the direct path between the sound source and the listener, but not fully blocking it (for example, someone shouting from around a corner). This means that the sound is slightly muffled through a lowpass filter, but the reverb applied to the sound is not filtered.

Occlusion is when the path from the sound source to the listener is completely blocked (for example, someone shouting from behind a wall). In this case, both the sound and the reverb are muffled by a lowpass filter.
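The distinction comes down to which signal paths get filtered, which the following sketch captures. How the engine decides whether a source is obstructed or occluded (usually via raycasts) is left out here; the two flags are assumed to be known already.

```python
def filter_paths(direct_path_blocked: bool, fully_enclosed: bool):
    """Decide which signal paths receive a lowpass filter.
    Obstruction: the direct path is blocked but reverb still reaches the
    listener around the obstacle, so only the dry signal is filtered.
    Occlusion: the source is sealed off, so dry signal AND reverb are filtered."""
    if fully_enclosed:  # occlusion (e.g. behind a wall)
        return {"dry_lowpass": True, "reverb_lowpass": True}
    if direct_path_blocked:  # obstruction (e.g. around a corner)
        return {"dry_lowpass": True, "reverb_lowpass": False}
    return {"dry_lowpass": False, "reverb_lowpass": False}

print(filter_paths(direct_path_blocked=True, fully_enclosed=False))
print(filter_paths(direct_path_blocked=True, fully_enclosed=True))
```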

Check out these great graphics from Audiokinetic which help to visualise the difference: https://www.audiokinetic.com/library/2017.2.10_6745/?source=WwiseProjectAdventure&id=obstruction_and_occlusion

The Application

Now that we have familiarised ourselves with some of the tools, let’s take a look at how they are actually used in the context of a game: https://youtu.be/gZ991uRSlMw

Advanced Resources (Technical Sound Designer)

A look at advanced audio functions in Unreal Engine: https://www.asoundeffect.com/unreal-engine-game-audio/

An in-depth explanation of attenuation and its various uses with great graphics: https://docs.unrealengine.com/4.27/en-US/WorkingWithAudio/DistanceModelAttenuation/

How the environmental acoustics (reverbs, ambiences and the weather system) were created in Tom Clancy’s The Division 2: https://youtu.be/7ME1CZyYNhg

How Hitman 2 improved its reverb systems: https://blog.audiokinetic.com/hitman2-enhancing-reverb-on-modern-cpus/

The impressive environmental acoustics of Gears of War 4: https://youtu.be/qCUEGvIgco8