With my apologies to the late great Ian Dury for the title, today I’m getting violent and hitting innocent triangles for fun :O

The Raytracer versus Rasterizer debate often portrays the two systems as radically different ideas, with one fundamentally better than the other. However, to a large degree they do exactly the same job, just phrased differently. What they both try to solve is hits, which is one of the key aspects of many systems in a game, but with particular importance and focus for the visual side of things.

I define a hit as the intersection between an extruded ray or volume and some geometry in the scene. To demonstrate, let's set up a really simple scene (I'm sure we all wish our real scenes were this simple!) that consists of a render-target of 2×2 pixels (the checkerboard grid in the pictures) and 1 or 2 triangles (coloured red and blue). Green pointy cylinders represent intersection tests, hitting if they have gone through the triangles. The ray tracer shown below traces its rays from the render-target, and each one separately calculates whether it has intersected the triangle; in this picture, 3 hit and 1 missed.

Raytracer firing into a scene, 3 rays hit the triangle
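To make the naive version concrete, here is a minimal C++ sketch of that per-pixel trace: one ray fired from the centre of each pixel of a 2×2 render-target, tested against every triangle in the scene. The ray–triangle test (Möller–Trumbore) is just one reasonable choice, and all the type and function names are illustrative assumptions rather than anything from a real engine.

```cpp
// Naive ray tracer hit test: every pixel fires one ray and tests every triangle.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Moller-Trumbore: true (and the distance t along the ray) if the ray hits the triangle.
static bool rayTriangle(Vec3 orig, Vec3 dir, const Triangle& tri, float& t)
{
    const float kEpsilon = 1e-6f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -kEpsilon && det < kEpsilon) return false;    // ray parallel to the triangle plane
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > kEpsilon;                                     // hit must be in front of the ray origin
}

int main()
{
    // 2x2 render-target in the z=0 plane, one triangle floating in front of it at z=1.
    std::vector<Triangle> scene = { { {0,0,1}, {2,0,1}, {0,2,1} } };
    const int width = 2, height = 2;

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            Vec3 orig = { x + 0.5f, y + 0.5f, 0.0f };        // ray starts at the pixel centre
            Vec3 dir  = { 0.0f, 0.0f, 1.0f };                // and traces straight into the scene
            float t = 0.0f;
            bool hit = false;
            for (const Triangle& tri : scene)                // naive: every pixel tests every triangle
                hit = hit || rayTriangle(orig, dir, tri, t);
            std::printf("pixel (%d,%d): %s\n", x, y, hit ? "hit" : "miss");
        }
    return 0;
}
```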

A rasterizer works the other way, effectively tracing from the triangle surface towards the render-target.

A rasterizer generates 3 hits on a render-target
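The same toy scene phrased the rasterizer's way round, again only as a sketch: walk each triangle and ask which render-target pixels it covers, here using a simple edge-function coverage test at pixel centres. The names and the particular coverage test are my own illustrative choices, not how any real rasterizer is built.

```cpp
// Naive rasterizer hit test: every triangle asks which render-target pixels it covers.
#include <cstdio>
#include <vector>

struct Vec2  { float x, y; };
struct Tri2D { Vec2 v0, v1, v2; };

// Edge function: signed area of (a, b, c); its sign says which side of edge a->b point c is on.
static float edge(Vec2 a, Vec2 b, Vec2 c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main()
{
    const int width = 2, height = 2;
    std::vector<Tri2D> triangles = { { {0,0}, {2,0}, {0,2} } };   // already projected to 2D

    for (const Tri2D& t : triangles)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                Vec2 p = { x + 0.5f, y + 0.5f };                  // sample at the pixel centre
                float w0 = edge(t.v1, t.v2, p);
                float w1 = edge(t.v2, t.v0, p);
                float w2 = edge(t.v0, t.v1, p);
                // The pixel is covered if it lies on the same side of all three edges.
                bool covered = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                               (w0 <= 0 && w1 <= 0 && w2 <= 0);
                if (covered)
                    std::printf("triangle covers pixel (%d,%d)\n", x, y);
            }
    return 0;
}
```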

Even from this very simplistic version, it's easy to see why a rasterizer can be seen as more efficient: only the pixels in the render-target that are physically covered by each triangle are considered, whereas in a naive ray tracer every pixel must test every triangle to know whether it is covered. In practice it's various forms of coherence that make the real performance difference for both algorithms, but for now let's just call that an implementation issue and move on.

However, this simple view quickly encounters a complication: when you have multiple triangles all hitting the same pixel, which one (or ones) deserves to actually colour the render-target?

2 triangles rasterize but collide where they overlap the same frame-buffer pixel

Ray tracing or casting?

Ray casting generally refers to firing a single ray and recording where this primary ray has hit, whereas ray tracing generally refers to the more advanced use of secondary rays and the lighting path tracing that goes with it. This article is somewhere in the middle: whilst I talk about multiple hits, and therefore primary and secondary rays, I'm not talking about any of the lighting calculations, purely about the geometric solutions rather than a complete renderer. Same for rasterizers: this is purely about hit determination and not about all the other goodies that go into making a decent rendered image.

Naive theory versus practice

A lot of the discussion in this article is deliberately naive, describing the simplest of implementations, and this may cause some to say it's unbalanced. Saying a ray tracer checks each triangle against each pixel just isn't how most ray tracers work, however it is true in the highest, simplest description. I don't intend to suggest that anything discussed here is by any means a good implementation; indeed, at least to start with, I'm trying to ignore implementations, as many of the problems have been solved by clever people and algorithms. Nobody would build a ray tracer or rasterizer as described here, except for educational purposes. However, sometimes stripping something down to its most naked form allows you to see further and more clearly than before. At least that's my hope…

Simplified Glossary (for this article only!)

Ray – an infinitesimally thin line with a start point, tracing along a vector.
Fragment – A hit from an intersection; it may or may not end up contributing to the final pixel colour in a render-target.
Pixel – An area on a render-target that can be coloured from geometry in a scene.
Hit – An intersection on the surface of some piece of geometry along a trace vector to/from a pixel.
Index of Refraction (IOR) – How light's direction is altered when passing through translucent surfaces. 1 means unaltered from the input vector.
Render-target – a rectangular grid of pixels that together form an image.

To infinity and beyond!

Once you see that ray tracing and rasterization are two sides of the same coin, you can start to explore using the hardware we have to better effect. We have GPUs with massively fast rasterizers that also have huge ALU compute power, which is ideal for ray intersection algorithms. That is why this is all about hitting, not about how you determine the hit.

With high-end D3D11-class hardware, we are almost full circle back to the software 3D days, just with massive GPUs to help us. The question (one that I suspect will take years to even begin to explore) is which of these will be the path to better game visuals: just using that power to add a few effects to the traditional GPU/rasterizer pipeline (economically, for a production engine, this is all you can do at the moment), rejecting rasterization and moving to ray tracing completely, or something different from these two dominant paradigms.

I personally have no doubt it will be the last option: something different, but borrowing from many existing paradigms including rasterization and ray tracing. I've been playing with my home research engine, with top-end D3D11 hardware as the minimum spec, and there are just too many possibilities. There are pages of forgotten papers and algorithms from the 70s and 80s, and ideas that probably haven't been used due to the 'insane' expense back before the massive CPUs and GPUs we have today.

Hopefully I'll get to describe them in the future, but rasterizing without z-buffers sure makes us old gits feel young again; I've taken to listening to 80s music just to get the vibe right ;)

Whilst the overlap picture above covers the rasterizer case, ray tracing has the same problem, but with one major advantage. As the ray tracer casts from the render-target through each triangle at each pixel, each pixel has a list of all the possible fragments that might make up its colour. So it's fairly easy to choose some rule to decide: the simple, obvious one is that the closest fragment to the pixel wins and the pixel takes on its colour.
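As a sketch of that per-pixel fragment list, assuming each fragment carries nothing more than a distance and a colour: sort whatever the ray collected and let the closest fragment win. Fragment and resolvePixel are hypothetical names for illustration.

```cpp
// Ray tracing's view of the overlap problem: each pixel keeps every fragment its
// ray produced, then a simple rule (closest wins) picks which one colours the pixel.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Fragment
{
    float    distance;   // distance along the trace vector from the render-target
    uint32_t colour;     // colour of the triangle that produced this hit
};

// Given all the fragments a pixel's ray collected, return the colour of the
// closest one, or a background colour if the ray missed everything.
uint32_t resolvePixel(std::vector<Fragment> fragments, uint32_t background)
{
    if (fragments.empty())
        return background;
    std::sort(fragments.begin(), fragments.end(),
              [](const Fragment& a, const Fragment& b) { return a.distance < b.distance; });
    return fragments.front().colour;   // closest fragment to the pixel wins
}
```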

Rasterization means each triangle fragment doesn't know where fragments from other triangles are, so without some additional outside communication the frame-buffer pixel will simply have to be happy with submit order or random colouring. The Z (or depth) buffer is a simple solution to first-hit determination for rasterization: each render-target pixel keeps track of the depth of the closest hit so far; if a new fragment hit is closer, update the colour and depth, and if it's further away, just discard that fragment. The Z buffer provides a way for each triangle to communicate with every other triangle in an extremely limited way, saying per pixel whether this triangle is closer than any other.
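Here is a sketch of the Z buffer doing that same job on the rasterizer side, with illustrative types: each pixel remembers only the depth of the closest fragment submitted so far, and every new fragment either replaces it or is discarded.

```cpp
// The Z buffer's answer to the overlap problem: per pixel, remember only the depth
// of the closest fragment so far and discard anything further away.
#include <cstdint>
#include <limits>
#include <vector>

struct DepthColourBuffer
{
    int width, height;
    std::vector<float>    depth;
    std::vector<uint32_t> colour;

    DepthColourBuffer(int w, int h)
        : width(w), height(h),
          depth(size_t(w) * h, std::numeric_limits<float>::max()),   // start "infinitely far away"
          colour(size_t(w) * h, 0) {}

    // Called once per rasterized fragment, in whatever order the triangles were submitted.
    void submitFragment(int x, int y, float fragDepth, uint32_t fragColour)
    {
        size_t i = size_t(y) * width + x;
        if (fragDepth < depth[i])      // closer than anything this pixel has seen so far?
        {
            depth[i]  = fragDepth;     // yes: keep the new colour and depth
            colour[i] = fragColour;
        }
        // no: the fragment is simply discarded, the pixel never hears about it again
    }
};
```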

The result is that for the first hit, both ray tracing and rasterization have what is essentially the same solution: sort the fragments and pick the closest. The main difference is that with rasterization and a Z buffer, the pixel on the render-target only has access to the first hit, whereas in ray tracing it has access to all the fragments along the trace vector. Of course, for first-hit detection you don't care about fragments further away than the closest, so ray tracing has no advantage over rasterization in this case.

But when you do (transparency, volume effects, etc.) ray tracing has this nice property, except it's mostly not as useful as often assumed. Most people claim ray tracing naturally handles things like shadows, refraction and other advanced effects, but if you've read the above it's not clear where this natural ability comes from; the only major difference we've noted is that each pixel has a list of fragments along its trace vector. That only helps for transparency with an Index of Refraction of 1, that is, when the transparent surface transfers colour but does not alter the ray's path. To change its path, or to check another trace vector, you need another ray.

Which is of course what a ray tracer does: each hit can optionally fire off additional rays (secondary rays) that trace a different vector, check whether the hit is visible from a light source, or even just add extra samples to improve the anti-aliasing. It's a nice system, but perhaps controversially it's no more natural to ray tracing than it is to rasterization, as in almost all the cases where it's used in ray tracing it's purely to find a first hit along the newly spawned ray, which both systems are equally capable of. From a high-level point of view, it's just as natural to rasterize out to a new render-target along the new trace vector to obtain exactly the same information. Ignoring performance, it's just as correct to “spawn render-target for trace” as it is to “spawn ray for trace”.
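As a sketch of “spawn ray for trace”, here is roughly what firing a secondary ray from a refracting hit might look like: the refract() helper is just Snell's law driven by the ratio of the indices of refraction, and Ray, Vec3 and spawnSecondaryRay are hypothetical names for illustration, not any particular tracer's API.

```cpp
// Spawning a secondary ray at a refracting hit: bend the trace vector by the
// surface's index of refraction (Snell's law) and continue tracing from the hit point.
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
static Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 origin, dir; };

// Snell's law for a normalised incident direction and outward surface normal.
// 'eta' is the ratio of the IORs (outside / inside); eta == 1 leaves the direction
// unaltered, matching the glossary above. Returns nothing on total internal reflection.
static std::optional<Vec3> refract(Vec3 incident, Vec3 normal, float eta)
{
    float cosI  = -dot(incident, normal);
    float sinT2 = eta * eta * (1.0f - cosI * cosI);
    if (sinT2 > 1.0f)
        return std::nullopt;                                  // total internal reflection
    float cosT = std::sqrt(1.0f - sinT2);
    return add(scale(incident, eta), scale(normal, eta * cosI - cosT));
}

// At a hit, fire off the secondary ray that continues the trace along the new vector.
// (A real tracer would reflect on total internal reflection; here we just carry straight on.)
static Ray spawnSecondaryRay(Vec3 hitPoint, Vec3 incidentDir, Vec3 surfaceNormal, float eta)
{
    Vec3 newDir = refract(incidentDir, surfaceNormal, eta).value_or(incidentDir);
    return { hitPoint, newDir };                              // the first-hit search starts again from here
}
```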

In practice, tracing a new ray is often much simpler and faster, which is why these effects are currently largely restricted to ray tracers.

However, both sides of the coin are starting to look at other ways of approaching the hit problem, to perform better or to add extra capabilities. And that is where things are starting to get interesting…