I’m doing a relatively simple post today because I’ve had way too many milestones and sleepless nights in the past 7 days. That’s why I’m going to talk about a very basic graphics programming concept: doing image space calculations and effects. I realize that there are many seasoned game developers who frequent this blog, but this one is for the ones who are just getting started.

Tools and Motivation

As a developer of a real-time graphics technology (video games), you are almost certainly going to be making use of a GPU to accelerate your graphics processing. The GPU exploits the highly parallel nature of rendering to speed up its work, and is responsible for transforming your 3D world into a 2D image that is displayed to the player on their screen. However, not all effects are easy to simulate in 3D space. That’s where doing additional image space calculations on the GPU can come in handy. Some of these are quite obvious, such as depth of field, since depth of field is an artifact of lenses. Other popular image space effects include motion blur, color correction, and anti-aliasing. Image space calculations are also the foundation of deferred shading and lighting, rendering techniques that are becoming increasingly popular for handling a large number of lights in a scene.

The Actual Technique

Image space calculations can be performed on the CPU, but as with most things in graphics that would be slow and booorrrriiinng (unless you’re playing with SPUs, in which case carry on). The main point is that iterating over an image in the main thread while performing per-pixel calculations on the CPU would most likely be in bad taste. Instead you might consider the following GPU-based solution:

  1. Render your scene normally, except render it to a texture if you aren’t doing so already.
  2. Render a quad that covers the entire screen, with the texture from the previous step applied across the quad.
  3. Perform your calculations in the shader code used to render that full screen quad, modifying how the texture is applied (a minimal sketch follows this list).
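
To make step 3 concrete, here’s a minimal sketch (in Cg) of the simplest possible post-process fragment shader: it just copies the rendered scene straight to the screen. The uniform name _SceneTex and the v2f struct are illustrative assumptions, not code from any particular engine.

uniform sampler2D _SceneTex;   // the texture the scene was rendered into (step 1)

float4 frag (v2f i) : COLOR
{
    // Any per-pixel effect would modify this lookup; this version changes nothing.
    return tex2D(_SceneTex, i.uv);
}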

There are several catches with this that you have to keep in mind. First and foremost, each fragment’s calculations cannot be too dependent on other locations on the screen. If you have to sample your frame many times, those texture accesses will quickly add up, which is one of the big considerations that comes into play when performing screen space blurs. Secondly, while this might produce an effect far more cheaply than trying to model a similar effect in 3D space, keep in mind that the actual cost depends on the resolution of the screen, which may be less than desirable. As resolution increases, so does the number of fragments being processed. In the end, it’s all about picking when and where to perform different calculations in your game.
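
As a rough illustration of how those texture accesses add up, here’s a hedged sketch of a 3x3 box blur: nine samples per fragment, at every fragment on the screen. The _SceneTex and _TexelSize uniforms are assumptions made for this sketch (with _TexelSize holding one over the screen resolution).

uniform sampler2D _SceneTex;   // previously rendered scene
uniform float2 _TexelSize;     // assumed to hold 1.0 / screen resolution

float4 frag (v2f i) : COLOR
{
    float4 sum = 0;
    // 3x3 neighborhood: 9 texture reads for every fragment on screen.
    for (int x = -1; x <= 1; x++)
    {
        for (int y = -1; y <= 1; y++)
        {
            sum += tex2D(_SceneTex, i.uv + float2(x, y) * _TexelSize);
        }
    }
    return sum / 9.0;
}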

An Example

Here’s a sample of a very simple post-processing fragment shader, written in Cg. It does a simple screen space distortion based on the x and y channels of a texture. The vertex shader doesn’t do anything particularly special other than set up uvs that are interpolated from 0 to 1 across the quad (a rough sketch of it appears after the example). In general, most of the action happens in the fragment shader when doing post-processing.

uniform sampler2D _MainTex;
uniform sampler2D _DistortionMap;
uniform float _Distortion;

float4 frag (v2f i) : COLOR
{
    float2 distortedOffset = (tex2D(_DistortionMap, i.uv).xy * 2 - 1);
    distortedOffset *= _Distortion;

    float2 distortedUV = i.uv + distortedOffset;
    return tex2D(_MainTex, distortedUV);
}
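
Since the vertex shader was only described above, here’s a rough sketch of what such a pass-through vertex shader (and the v2f struct the fragment shader consumes) might look like. It assumes the quad’s vertices are already supplied in clip space, so no transform matrix is needed; the struct and member names are illustrative, not the original code.

struct appdata
{
    float4 vertex   : POSITION;
    float2 texcoord : TEXCOORD0;
};

struct v2f
{
    float4 pos : POSITION;
    float2 uv  : TEXCOORD0;
};

v2f vert (appdata v)
{
    v2f o;
    o.pos = v.vertex;    // quad assumed to already be in clip space
    o.uv  = v.texcoord;  // uvs interpolate from 0 to 1 across the quad
    return o;
}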

The fragment shader itself is really only a few lines of code! So easy! Here’s what is happening in a little more depth. The float2 “distortedOffset” is a lookup into the distortion map, unpacked to fit into the range [-1,1] instead of the [0,1] range returned by tex2D(sampler2D, float2), and then multiplied by _Distortion to control the strength of the distortion. That offset is added to the interpolated uv coordinate to produce “distortedUV”. Finally, a lookup into _MainTex is performed, where _MainTex is the previously rendered image. If there is no distortion, then the call would be equivalent to tex2D(_MainTex, i.uv), which would just copy the source image’s color to the new target. Speaking of targets, you might consider rendering this post-processing pass into a texture as well, not just your initial rendering of your 3D scene. That way you can pump the output of this post-process into another post-process and stack effects on top of each other.
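
To make the stacking idea concrete, here’s a hedged sketch of a second pass: if the output of the distortion pass above is rendered into a texture and bound as _MainTex for this pass, a shader like this would darken the corners of the already-distorted image. The _VignetteStrength parameter is an assumption for this sketch, not something from the original shader.

uniform sampler2D _MainTex;        // output of the previous post-process pass
uniform float _VignetteStrength;   // illustrative parameter controlling the falloff

float4 frag (v2f i) : COLOR
{
    float4 color = tex2D(_MainTex, i.uv);
    float2 fromCenter = i.uv - 0.5;
    // Darken fragments based on their squared distance from the screen center.
    float falloff = 1.0 - dot(fromCenter, fromCenter) * _VignetteStrength;
    return color * falloff;
}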

Here is a sample of this particular distortion shader in action.
The original rendered scene:

Before Post-Processing

Texture that the distortion is calculated from in the shader:

Distortion Map

The final result:

After Post-Processing

Conclusion

Great success! The important questions here should always be: how hard/expensive would it be to achieve the same effect in a different space? What do you gain/lose by doing it in image space? And, just as important, does this actually make your game look any better? In the end, I personally think that post-processing is great fun, especially when used in terrible, crazy ways on personal projects. You never know what you’ll come up with when you play around with ideas in a different space; for example, here’s one paper exploring the possibility of moving skin rendering into screen space: