Well, it’s been a while since our previous installment, but we’re back with the third and final part in our Interactive Audio Fundamentals series. In Part 1, we looked at the power of layers to massively increase variation. In Part 2, we delved into automation and how connecting game values to various parameters can make our world come alive.

This time out, we’ll wrap up by covering a few specific uses of effects, simple duckers, and quantizing everything.

Low Pass Filters

The low pass filter is one of the most commonly used effects because it is relatively lightweight from a CPU standpoint and very versatile in how it can be applied. For non-audio readers: a low pass filter allows low frequencies to ‘pass through’ while attenuating higher frequencies. It generally makes things sound muffled.

That muffling can be used for occlusion in 3D environments, to help accentuate intensity, and as a tool to generate further variation. Automating the cutoff frequency helps the effect feel dynamic and less canned.
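Most engines provide a low pass filter built in, but for intuition it helps to see how simple one can be. Here is a minimal one-pole version in Python – purely illustrative, with made-up function names and defaults:

```python
import math

def one_pole_lpf(samples, cutoff_hz, sample_rate=48000):
    """Apply a simple one-pole low pass filter to a list of samples.

    Uses the recurrence y[n] = y[n-1] + a * (x[n] - y[n-1]), where the
    coefficient `a` is derived from the cutoff frequency. Low frequencies
    pass through mostly unchanged; high frequencies are smoothed away.
    """
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out
```

Lowering `cutoff_hz` at runtime is exactly the "muffling" knob described above.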

For example, if you have some kind of impact sound and an intensity parameter, you can link the parameter to the cutoff frequency. High intensity impacts get the full unfiltered sound, while a low intensity impact gets just the low frequency elements – as if you just ‘nudged’ the object, with none of the clatter.
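That mapping from intensity to cutoff could look like this sketch – the range and the exponential curve are assumptions, not engine defaults, but an exponential mapping tends to feel more natural because we hear pitch logarithmically:

```python
def cutoff_for_intensity(intensity, min_hz=250.0, max_hz=20000.0):
    """Map a 0..1 intensity parameter to an LPF cutoff frequency in Hz.

    At intensity 0 only frequencies below min_hz pass (the 'nudge');
    at intensity 1 the cutoff is wide open and the sound is unfiltered.
    The 250 Hz..20 kHz range is an arbitrary example.
    """
    intensity = max(0.0, min(1.0, intensity))  # clamp to valid range
    return min_hz * (max_hz / min_hz) ** intensity
```

You would evaluate this each time the impact fires and feed the result to the filter.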

An LPF might seem like a bit of a blunt instrument on the surface, but once you understand its applications, it is a highly effective, extremely versatile tool that can be used in a wide variety of situations.

Duckers

In the absence of a compressor in your mix chain, you can use some simple volume automation to create duckers to control your mix. A ducker is simply a trigger or piece of logic that controls the volume of one element in response to either a game event, or some other piece of audio triggering. You can think of it like side-chaining a compressor.

Duckers are frequently used to accentuate dialog – so that everything else backs off while characters are speaking. In its simplest form, you ramp down the volume of everything else in the game by 10 or 20% while speech is playing. This lets the speech cut through the mix a bit more, without an obvious dip in the overall mix.

The function of a ducker is really quite similar to that of a compressor – it controls the dynamic range of a particular element (albeit in a very rudimentary way) and helps bring focus to some part of the mix. You could use compressors with side-chains to achieve the same result, but side-chaining is not available on all platforms, and can become CPU intensive depending on the effect you’re trying to achieve. Ramping volumes between targets is cheap and effective.
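The whole ducker boils down to a gain that chases a target each frame. Here is a minimal sketch – the class name, defaults, and per-frame update pattern are all illustrative, not from any particular middleware:

```python
class Ducker:
    """Duck a bus's gain while a trigger (e.g. speech) is active.

    Gains are linear (1.0 = full volume); ramp_time is the number of
    seconds to travel the full distance between unducked and ducked.
    """
    def __init__(self, ducked_gain=0.8, ramp_time=0.25):
        self.ducked_gain = ducked_gain  # e.g. a ~20% reduction
        self.ramp_time = ramp_time
        self.gain = 1.0
        self.active = False

    def set_active(self, active):
        """Call with True when speech starts, False when it ends."""
        self.active = active

    def update(self, dt):
        """Step the gain toward its target; call once per frame with
        the frame's delta time in seconds. Returns the current gain."""
        target = self.ducked_gain if self.active else 1.0
        step = dt / self.ramp_time
        if self.gain < target:
            self.gain = min(target, self.gain + step)
        else:
            self.gain = max(target, self.gain - step)
        return self.gain
```

The ramp is what keeps the duck from sounding like a hard volume cut; a quarter of a second in and out is a reasonable starting point to tune from.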

Quantizing Everything

Many games have an inherent rhythm to their gameplay, so it can help the entire experience gel if you use music to your advantage. If your game has a soundtrack, you can map out the tempo of the song, run a metronome alongside the track to keep time, and then quantize all of your sound effects to the music. What does that mean, exactly? It means snapping sounds to given musical intervals. The exact subdivision you use varies with the tempo of the song, but the principle is that every single sound in the game plays back in rhythm with the soundtrack. In practice, you apply a small delay to every sound so that it starts on a musically appropriate beat.
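Computing that small delay is straightforward arithmetic on the tempo. A sketch, with illustrative names – here `subdivision=4` snaps to sixteenth notes in 4/4, i.e. four slots per beat:

```python
def delay_to_next_beat(now, bpm, subdivision=4, track_start=0.0):
    """Return the number of seconds to wait so a sound lands on the
    next musical subdivision of the soundtrack.

    `now` and `track_start` are in seconds on the same clock. A sound
    that fires exactly on a slot boundary gets a delay of zero.
    """
    beat_len = 60.0 / bpm            # seconds per beat
    slot_len = beat_len / subdivision
    elapsed = now - track_start
    return (-elapsed) % slot_len     # time remaining until the next slot
```

When a sound effect is requested, you would schedule it `delay_to_next_beat(...)` seconds in the future instead of playing it immediately.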

This is obviously a pretty subtle effect – players might not be able to describe exactly what is going on, but they will definitely notice it, because everything will sit together tightly.

Wrapping it all up

We’ve covered a bunch of topics in this little series, all at a very high level – but importantly, they cover some of the fundamental principles of interactive audio for games: how you go from raw samples to audio that comes alive in your game. The key is to understand the fundamental principles and tools available for creating dynamic audio worlds, and then apply them creatively to your content.