Last time we looked at layers: how breaking a sound into layers can massively increase variation and give us more control and more dynamic options. We saw that turning individual layers on and off can be a powerful way to alter the character of a sound at runtime without having to create a completely different sample.
This week we’ll take a look at automation. In multitrack software, automation is used to control the various parameters of a track: volume, pan, send levels, and assorted effect parameters. The same principle applies in games, with the added challenge of having to be dynamic. In your DAW, your music tracks are linear and you can simply chart volume vs. time – but in a game, you can’t lay everything out on a predetermined timeline and call it a day. Instead, you have to create a dynamic mapping that automates parameters in real time.
Automation Parameters
Basically anything with a knob or a slider can be automated, although some parameters are better suited to automation than others. In general, anything with stepped or discrete values will not respond as well as a continuous, analog-style control.
Volume is the most obvious and common parameter to automate. In its simplest form, volume automation is the fade in/out, and in most cases those fades are hard coded on a set timeline – much like the fades you apply in your DAW. However, you don’t have to venture far to find volume automation that moves beyond baked fades. Dynamic crossfades driven by some game value are probably the most common use: consider distance, where the volume of a source is altered in real time based on the listener’s distance from it. Or intensity might control two layers of an ambience, with one layer fading out and the other fading in as the intensity rises.
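To make that concrete, here’s a minimal sketch in plain C++ – no particular engine or audio middleware assumed, and the function names and curve shapes are just illustrative – of distance-based attenuation and an intensity-driven crossfade between two ambience layers:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical helpers: not tied to any specific engine or middleware.

// Map distance to a gain (0..1): full volume inside minDist,
// silent at maxDist, linear roll-off in between.
float DistanceGain(float distance, float minDist, float maxDist)
{
    float t = (distance - minDist) / (maxDist - minDist);
    return 1.0f - std::clamp(t, 0.0f, 1.0f);
}

// Equal-power crossfade between two ambience layers driven by a
// normalized intensity value: layer A fades out as intensity rises,
// layer B fades in, and the combined loudness stays roughly constant.
void IntensityCrossfade(float intensity, float& gainA, float& gainB)
{
    intensity = std::clamp(intensity, 0.0f, 1.0f);
    const float halfPi = 1.5707963f;
    gainA = std::cos(intensity * halfPi);
    gainB = std::sin(intensity * halfPi);
}

int main()
{
    float calm = 0.0f, tense = 0.0f;
    IntensityCrossfade(0.25f, calm, tense);
    std::printf("distance gain: %.2f  calm layer: %.2f  tense layer: %.2f\n",
                DistanceGain(12.0f, 1.0f, 30.0f), calm, tense);
    return 0;
}
```

The equal-power (cos/sin) shape is one common choice for the crossfade; a plain linear fade tends to dip in perceived loudness at the midpoint.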
Pitch is probably the next most commonly automated parameter – from doppler effects to a tiny bit of pitch randomization that further increases variation. Don’t have memory for a lot of footstep samples? No problem: use one and re-pitch it slightly every time it’s triggered. The other classic pitch automation application is engine sounds, where you pitch your sample up and down in relation to the engine’s RPM. Combine this with volume automation to crossfade different engine samples under different loads, and you’re on your way to creating something dynamic and alive.
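As a rough sketch (plain C++ again, with made-up helper names rather than any real middleware API), footstep pitch randomization and an RPM-driven engine pitch might look like this:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Hypothetical helpers, not a real engine API.

// Return a playback-rate multiplier for a footstep, randomized by up to
// +/- `cents` to disguise sample reuse (100 cents = one semitone).
float RandomFootstepPitch(std::mt19937& rng, float cents = 50.0f)
{
    std::uniform_real_distribution<float> offset(-cents, cents);
    return std::pow(2.0f, offset(rng) / 1200.0f); // cents -> rate ratio
}

// Map engine RPM onto a pitch multiplier for a loop recorded at
// `recordedRpm`, clamped so extreme RPM values don't sound absurd.
float EnginePitch(float rpm, float recordedRpm = 3000.0f)
{
    float ratio = rpm / recordedRpm;
    return std::fmin(std::fmax(ratio, 0.5f), 2.0f);
}

int main()
{
    std::mt19937 rng(42);
    std::printf("footstep rate: %.3f  engine pitch at 4500 rpm: %.2f\n",
                RandomFootstepPitch(rng), EnginePitch(4500.0f));
    return 0;
}
```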
Pan is used extensively in 3D audio to place sounds in space as they move around relative to the listener. Filter cutoffs are used in a wide range of situations to ‘muffle’ sounds, simulate occlusion, and generally add variation to sound effects that differ in intensity.
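For illustration, here’s a small sketch of both ideas – a standard constant-power pan and an occlusion-to-cutoff mapping. The math is conventional, but the names and numbers are assumptions rather than any engine’s actual API:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch: the math is standard, the names are made up.

// Constant-power stereo pan from a position in [-1, 1]
// (-1 = hard left, +1 = hard right).
void ConstantPowerPan(float pos, float& leftGain, float& rightGain)
{
    float angle = (pos + 1.0f) * 0.25f * 3.14159265f; // 0 .. pi/2
    leftGain  = std::cos(angle);
    rightGain = std::sin(angle);
}

// Map a normalized occlusion amount (0 = clear, 1 = fully occluded)
// onto a low-pass cutoff, sweeping from 20 kHz down to 500 Hz.
// Interpolating in log space keeps the sweep perceptually even.
float OcclusionCutoffHz(float occlusion)
{
    const float openHz = 20000.0f, closedHz = 500.0f;
    return openHz * std::pow(closedHz / openHz, occlusion);
}

int main()
{
    float left = 0.0f, right = 0.0f;
    ConstantPowerPan(-0.5f, left, right);
    std::printf("L %.2f  R %.2f  cutoff at 0.7 occlusion: %.0f Hz\n",
                left, right, OcclusionCutoffHz(0.7f));
    return 0;
}
```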
You can start to see that just about anything you can think of can be automated and linked to some real time value. The possibilities for effect automation are staggering – dynamic distortion levels, reverb depth, delay length, resonance, even compressor thresholds and ratios can all have really substantial and interesting impacts on the feel and life of your sonic world.
Mapping Game Values
You can’t feed game values into automation parameters directly, so you need to create a mapping from the game value to the parameter. Where a DAW automates a value against time, a game maps it against a particular game value. Often that input will be normalized to the range 0.0 – 1.0 (which makes programmers happy), and your output values are in whatever range is appropriate for the parameter you’re automating (dB, Hz, degrees, cents, etc.). This gives sound designers the ability to finely tune how a sound’s volume rolls off with distance, how it ramps up with intensity, or how an engine’s pitch transitions so that it ramps smoothly and naturally.
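A common way to represent such a mapping is a designer-editable curve: a few control points over the normalized input, interpolated at runtime. The sketch below is a bare-bones piecewise-linear version; real tools offer richer curve shapes, and the AutomationCurve type and example roll-off points here are hypothetical:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// A bare-bones designer-editable mapping curve: (input, output) points
// over a normalized game value, evaluated with linear interpolation.
struct AutomationCurve
{
    // Points sorted by input; input is the normalized game value (0..1),
    // output is in whatever unit the parameter needs (dB, Hz, cents...).
    std::vector<std::pair<float, float>> points;

    float Evaluate(float x) const
    {
        if (x <= points.front().first) return points.front().second;
        if (x >= points.back().first)  return points.back().second;
        for (size_t i = 1; i < points.size(); ++i) {
            if (x <= points[i].first) {
                auto [x0, y0] = points[i - 1];
                auto [x1, y1] = points[i];
                float t = (x - x0) / (x1 - x0);
                return y0 + t * (y1 - y0);
            }
        }
        return points.back().second; // unreachable with sorted points
    }
};

int main()
{
    // Normalized distance (0..1) -> attenuation in dB: gentle at first,
    // then falling away steeply toward silence.
    AutomationCurve rolloff{{{0.0f, 0.0f}, {0.5f, -6.0f}, {1.0f, -60.0f}}};
    std::printf("attenuation at 0.75 distance: %.1f dB\n",
                rolloff.Evaluate(0.75f));
    return 0;
}
```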
Game values can be anything that would affect the sound itself – distance, intensity, ground type, room size, position, RPM, speed, force, etc. Any value floating around in your engine that a programmer can get their hands on can likely be used to drive automation parameters for your audio. This model has the added benefit that, from the programmer’s perspective, they simply need to normalize that value between 0.0 and 1.0, update it regularly, and move on with their lives. It lets sound designers focus on tuning the curves so the result sounds good, rather than bothering programmers to tweak some value in a table somewhere.
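On the programmer’s side, that handoff can be as small as this sketch, where the Normalize helper and the SetAudioParameter hook are placeholders for whatever your engine and audio tool actually expose:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical programmer-side glue: normalize a raw engine value into
// 0..1 and hand it off; the audio tool's curves map it to dB/Hz from there.
float Normalize(float value, float minValue, float maxValue)
{
    return std::clamp((value - minValue) / (maxValue - minValue), 0.0f, 1.0f);
}

// Imaginary hook standing in for whatever your audio system exposes.
void SetAudioParameter(const char* name, float normalized)
{
    std::printf("%s = %.2f\n", name, normalized);
}

int main()
{
    // e.g. once per frame or tick:
    float rpm = 4500.0f;
    SetAudioParameter("engine_rpm", Normalize(rpm, 800.0f, 7000.0f));
    return 0;
}
```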
Bringing It to Life
Automation is really your link to the game world. Without drawing on the values and properties of that world, your sound effects are little more than canned wav files that don’t feed back to the player in a useful or realistic way. Audio is such a visceral part of the experience, and so critical to bringing the world to life, that drawing on those values wherever possible to drive playback parameters is key. The system doesn’t have to be complex to be effective: some simple volume automation linked to the right game value can be incredibly powerful.
In our final installment, we’ll delve deeper into how we can use various effects to achieve particular results, how we can use simple volume automation to create duckers that control your mix, and how quantizing your entire game can make everything gel.