I know that is a controversial statement, so before you object, hear me out.

“Future” is the key word here. Without an unexpected jump in technology, we probably will not see mainstream use of ray tracing this decade.

[Demo: ray tracing in WebGL]

In an often-cited interview, John Carmack once said, “Head to head, rasterization is just a vastly more efficient use of whatever transistors you have available [in comparison to ray tracing].”

I think he is absolutely right. However, rasterization cannot be parallelized nearly as effectively as ray tracing, mostly due to the way depth sorting works. Without some insanely complex memory architecture, rasterization is forced to work triangle-by-triangle on any given buffer, because it must know whether a triangle is above or below any other triangle. In theory, you could parallelize it by comparing multiple triangles at any given time, but then you are tending towards ray tracing and giving up what makes rasterization so efficient (the same goes for order-independent buffers). You could also render to several buffers and then combine them, but that is probably not the best use of memory, and it would take considerable bandwidth to combine a significant number of buffers. Ray tracing, on the other hand, inherently works on a per-pixel basis. It does not matter what order the pixels are rendered in, and the results of any pixel are completely independent of the results of another pixel.
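To make that independence concrete, here is a minimal sketch (plain C++, with a single hard-coded sphere standing in for a scene, both of my own devising) of the per-pixel loop a ray tracer is built around. Every iteration could run on its own core with no coordination:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Minimal ray/sphere test: returns the distance along the ray to the
// nearest hit, or a negative value on a miss (direction assumed normalized).
static float hitSphere(float ox, float oy, float oz,
                       float dx, float dy, float dz,
                       float cx, float cy, float cz, float r) {
    float lx = ox - cx, ly = oy - cy, lz = oz - cz;
    float b = lx * dx + ly * dy + lz * dz;
    float c = lx * lx + ly * ly + lz * lz - r * r;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    return -b - std::sqrt(disc);
}

int main() {
    const int W = 256, H = 256;
    std::vector<float> image(W * H);

    // Every pixel is traced independently: no depth sorting, no shared
    // buffer state, each iteration writes only its own element. The loop
    // could be split across any number of cores (e.g. with OpenMP's
    // "#pragma omp parallel for") without changing the result.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Build a ray through this pixel on a simple pinhole camera.
            float dx = (x + 0.5f) / W - 0.5f;
            float dy = (y + 0.5f) / H - 0.5f;
            float dz = 1.0f;
            float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            float t = hitSphere(0, 0, 0, dx / len, dy / len, dz / len,
                                0, 0, 3, 1.0f);           // sphere at z = 3
            image[y * W + x] = (t > 0.0f) ? 1.0f : 0.0f;  // hit = white
        }
    }
    std::printf("center pixel: %f\n", image[(H / 2) * W + W / 2]);
    return 0;
}
```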

When it comes down to the number of instructions executed, ray tracing probably costs a roughly linear factor more than rasterization, without optimization or ray bounces. However, I think that with the proper underlying hardware, this extra cost could be absorbed by a growing number of compute units. As hardware tends towards an increasing number of cores, the case for ray tracing will likely become stronger.
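As a rough back-of-the-envelope model (my own framing, not a measured result): without acceleration structures, a tracer tests every primary ray against every primitive, while a rasterizer touches each primitive once plus the pixels it covers:

```latex
\text{cost}_{\text{trace}} \;\approx\; W H \cdot N
\qquad \text{vs.} \qquad
\text{cost}_{\text{raster}} \;\approx\; N + \sum_{i=1}^{N} A_i
```

where W × H is the resolution, N is the primitive count, and A_i is the number of pixels primitive i covers. The gap is roughly a factor of N, i.e. linear in scene complexity, and acceleration structures such as BVHs bring the per-ray factor down to roughly log N.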

There are many more things to consider, though.

First of all, I think that ray tracing with triangles is not an effective use of ray tracing, as Carmack suggests. I also think that sparse voxel octrees are not necessarily the answer. The problem with triangles is that they begin to lose their value as they approach the size of a pixel (which they already do in many cases). The problem with sparse voxel octrees is that they are not efficient at storing curved or noisy surfaces. Both are limited by the resolution of the underlying data, even with tessellation or super-sampling.

The main advantage of ray tracing, as I see it, is the ability to define arbitrary types of primitives or even extremely complex mathematical structures. As far as smooth surfaces go, I think the most efficient method of ray tracing might be to use a form of isosurface: a surface equidistant from a given set of points, lines, or other primitives (or modified for varying distances), not to be confused with metaballs. In combination with texture- or parameter-based displacement mapping, I think you could achieve a wide variety of surfaces. Higher-order primitives are also easier to work with because they are not subject to the same anomalies that triangles are, like gaps, (counter)clockwise winding errors, and bad normals. Nor do they suffer from the strange interpolation across a collection of triangles that can result in odd texture coordinates or lighting.
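As a concrete (and purely illustrative) example of a line-based isosurface: the set of points at a fixed distance r from a line segment is a capsule, its distance function is a few lines of code, and it can be ray traced directly by sphere tracing, i.e. repeatedly stepping the ray forward by the current distance to the surface. A minimal C++ sketch, assuming this distance-field formulation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a)         { return std::sqrt(dot(a, a)); }

// Signed distance from point p to the isosurface "all points at distance r
// from the segment ab", i.e. a capsule defined by two endpoints and a radius.
static float capsuleDistance(Vec3 p, Vec3 a, Vec3 b, float r) {
    Vec3 pa = sub(p, a), ba = sub(b, a);
    // Parameter of the nearest point on the segment, clamped to [0, 1].
    float h = std::clamp(dot(pa, ba) / dot(ba, ba), 0.0f, 1.0f);
    return length(sub(pa, scale(ba, h))) - r;
}

// Sphere tracing: step along the ray by the distance to the surface;
// stop when close enough (hit) or too far along the ray (miss).
static bool trace(Vec3 origin, Vec3 dir, Vec3 a, Vec3 b, float r, float& tHit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = add(origin, scale(dir, t));
        float d = capsuleDistance(p, a, b, r);
        if (d < 1e-4f) { tHit = t; return true; }
        t += d;
        if (t > 100.0f) break;
    }
    return false;
}

int main() {
    Vec3 a{-1, 0, 5}, b{1, 0, 5};  // segment endpoints
    float tHit;
    if (trace({0, 0, 0}, {0, 0, 1}, a, b, 0.5f, tHit))
        std::printf("hit at t = %f\n", tHit);  // expect roughly 4.5
    return 0;
}
```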

Defining things in a mathematical manner is the basis of procedural generation: it allows you to reduce file sizes and produce dynamic geometry that can vary with different parameters. Obviously, there is a ton of work to be done before such assets could compete with hand-crafted ones, and the tools simply are not there for producing ray-traceable objects based on various primitives. That said, triangles are very poorly suited to procedural generation, whereas isosurfaces are very well suited. It comes down to how simply an object can be described in mathematical terms: an isosurface implicitly describes its own structure through simpler underlying primitives like lines or curves, whereas with triangles the entire surface must be defined explicitly, vertex by vertex, along with how the triangles connect.
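To put the file-size point in concrete terms (a hypothetical asset of my own devising): the capsule isosurface sketched above is fully described by seven floats, and varying those parameters regenerates the shape for free, whereas a triangle mesh of the same shape has to spell out every vertex:

```cpp
// Implicit, procedural description: 28 bytes, perfectly smooth at any
// zoom level, and trivially parameterized (stretch, thicken, animate).
struct Capsule {
    float ax, ay, az;  // segment endpoint A
    float bx, by, bz;  // segment endpoint B
    float radius;
};

// The explicit equivalent: a tessellated mesh. Even a modest 64x32
// tessellation needs ~2,000 vertices and ~4,000 triangles (tens of
// kilobytes), and changing the radius means regenerating all of it.
// struct Mesh { std::vector<Vertex> vertices; std::vector<uint32_t> indices; };
```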

Ray tracing has other advantages, obviously. The quality of reflections, refractions, shadows, and radiosity simply cannot be reproduced efficiently (or at all) with rasterization. Ray tracing can also produce solid objects and use Boolean operations, which could have interesting applications (scene cut-aways, object destruction, etc.), as sketched below. Ray tracing also does not (inherently) support anti-aliasing or multi-sampling, because at a given point a ray either intersects or it does not; there is no partial intersection. From a programmer’s perspective, I think this is a good thing, because it keeps pixels “pure” and prevents data loss; many post-processing techniques do not work in conjunction with MSAA, or require hacks. Anti-aliasing can always be applied as a post-process where needed.
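The Boolean operations are easy to demonstrate in the same distance-field formulation used earlier (a sketch under that assumption; tracers that compute exact ray intersections achieve the equivalent by merging each ray’s entry/exit intervals against both solids). Union, intersection, and difference of solids reduce to min and max:

```cpp
#include <algorithm>

// CSG on signed distance functions: combining two solids is a one-liner.
// min/max yield a conservative bound rather than an exact distance in
// places, but a safe one: sphere tracing never oversteps the surface.
float csgUnion(float dA, float dB)        { return std::min(dA, dB); }
float csgIntersection(float dA, float dB) { return std::max(dA, dB); }
float csgDifference(float dA, float dB)   { return std::max(dA, -dB); } // A minus B

// Example, a scene cut-away: subtract a cutting volume from an object.
//   float d = csgDifference(objectDistance(p), cutVolumeDistance(p));
```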

I think at some point the loss in performance for ray tracing will become small enough, and the tools for it good enough, that it will shift into mainstream use. This is not to say rasterization will (or should) ever be dismissed; I am sure there will always be uses for it, and hybrid approaches will likely be common. Most importantly, realtime ray tracing is still a very young area of study. We have had over a decade to produce all sorts of cool tricks with GPU rasterization; I am sure there are many similar hacks we could produce for ray tracing. I think we will also find many ways to effectively “fake” results and gain a lot of speed at the loss of a little accuracy (in particular, there may be efficient ways of summing up the propagation of light at any given point, although I am sure this is no small task). Regardless, we should not hesitate to experiment in this area just because rasterization is dominant; at the very least, offline rendering (for film and other applications) could benefit greatly.