In my previous post, I wrote about the role computational geometry played in Black & White 2. My headline summary would be that computational geometry is a powerful but complicated tool: you should seriously consider it, but approach it with some caution. Identifying which problems are safe to tackle and which are still research problems is not obvious, and some simple-sounding extensions to well-known algorithms can quickly take you into no-man’s land. But the pay-off for a geometric approach can be significant: picking the right problem together with the right algorithm can produce highly efficient and compact solutions. In this post, I’ll cover a few areas to avoid.

For the sake of brevity, and glossing over many details, the navigation systems we built at Intrepid and Lionhead looked broadly similar to other mesh-based navigation systems, such as PathEngine. There was a mesh that described the navigable space, and actors found paths across it and traversed them. I would still prefer this approach of using a navigable mesh if I were to return to navigation tomorrow. The common alternatives of storing only blocking geometry, or of using a graph of path nodes, may be simpler, but we got significant additional mileage out of having an explicit, query-able representation of the free space, so I believe it’s worth the extra complexity. There was a typical list of design criteria for each version of the navigation system, but the most important one for this discussion was that both of them should be “robust”: robust in the sense that they never shattered the player’s immersion with daft-looking AI, and in the sense that all game code is mission critical and therefore cannot be too memory- or CPU-hungry, and should never, ever crash. Good navigation should be invisible.

Accuracy didn’t appear in the design criteria, but the term “accurate” in the context of computational geometry is frequently another way of saying “functional”. Inaccurate geometric algorithms don’t just return fuzzy results; they tend to crash. This is an inherent issue with the computational approach: the decision-making steps of most geometric algorithms place assumptions on the state of the geometric data for correct and efficient operation. If a triangle is flipped due to an inaccurately computed position, an algorithm that depends upon the winding order will fail. In general, topological changes in your geometry will cause algorithms to fail, and inaccuracy in computing vertices can give rise to topological changes. So even if your application can tolerate a reasonable degree of numerical inaccuracy, for many computational geometry algorithms to work in a game context, the requirement to be robust also implies the need for accuracy.
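To make the winding-order point concrete, here is a toy sketch of my own (the `orient2d` helper is hypothetical, not code from either game): the standard 2D orientation predicate, evaluated once in 64-bit floats and once exactly using Python’s arbitrary-precision integers. With large coordinates the float version reports three points as collinear when they are actually wound clockwise, which is exactly the kind of silent topological error described above.

```python
def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the cross product (b - a) x (c - a):
    > 0 counter-clockwise, < 0 clockwise, == 0 collinear."""
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

# In 64-bit floats, 10**16 + 1 rounds back to 1e16, so the predicate
# reports "collinear" for three points that are not.
assert orient2d(0.0, 0.0, 1e16, 1e16, float(10**16 + 1), 1e16) == 0.0

# With exact integer arithmetic, the same three points are clearly clockwise.
assert orient2d(0, 0, 10**16, 10**16, 10**16 + 1, 10**16) == -10**16
```

Any algorithm that branches on the sign of this predicate will take the wrong branch in the float case, and its internal invariants break from that point on.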

Floating-point arithmetic was not designed for the kind of accuracy requirements computational geometry imposes. It is non-associative ((a*b)*c != a*(b*c)), tends to be affected by compiler optimisations, and the accuracy of a given computation is impractically hard to predict ahead of time. There is an interesting paper by Jonathan Shewchuk, “Robust Adaptive Floating-Point Geometric Predicates”, which describes an approach to making floating-point computation robust for certain geometric calculations, but it’s non-trivial and not completely portable, so I would hesitate before recommending it. Otherwise, there are three rough (non-exhaustive) categories of solutions:

- Adopt some form of ‘infinite precision’ representation;
- Restrict your calculations to a particular integer range;
- Avoid scenarios where you need to answer this question.

It should be clear that the preferable solution is to avoid situations where accuracy becomes a stability issue, but if you adopt this approach too strictly you effectively rule out the majority of computational geometry routines and the efficiency they bring.
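As a quick aside, the non-associativity of floating point mentioned above is trivial to demonstrate (a toy check of my own, in Python):

```python
# Floating-point arithmetic is not associative: the grouping changes
# which intermediate results get rounded, and therefore the answer.
lhs = (0.1 * 0.1) * 10
rhs = 0.1 * (0.1 * 10)
assert lhs != rhs

# The same holds for addition.
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)
```

This is why an optimiser that re-orders arithmetic, or a compiler flag like fast-math, can change the results of geometric tests between builds.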

Some routines are not inherently problematic. Convex hull or triangulation algorithms can be made completely robust by working with integer coordinates (and may even be fine with floats for some uses). The most troubling problems involve accurately computing **intersection points**, which may be required as an inherent part of an algorithm (e.g. constructive solid geometry (CSG), a.k.a. boolean geometry operations), but also as a pre-process to remove overlaps before another, simpler algorithm can accept the input. For example, most 2D triangulation algorithms require non-overlapping line segments as input, so even if the triangulation itself is easy to make accurate, the problem has just been pushed further up the pipeline.
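As an illustration of that integer-coordinate robustness, here is a sketch of Andrew’s monotone chain convex hull (my example, not code from either game). With integer inputs, every cross product is evaluated exactly, so there is no sign error to destabilise the hull:

```python
def cross(o, a, b):
    # Exact in Python's arbitrary-precision integers for integer input.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: counter-clockwise hull of integer points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        # <= 0 discards clockwise turns and collinear points.
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each chain's last point is the other chain's first; drop the duplicates.
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is correctly excluded.
assert convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]) == \
    [(0, 0), (2, 0), (2, 2), (0, 2)]
```

No new vertices are created here, only selected, which is why the routine stays robust: the accuracy problem only appears once you have to *construct* points.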

The reason intersection points are a problem is that even if you start with your vertices at integer coordinates, the intersection point of two lines will not, in general, fall on integer coordinates. If you fail to represent the intersection point accurately you may introduce a topological change. The remaining solutions therefore boil down to whether you try to represent the intersection exactly, or whether you force a topological change in a way you can predict and manage.
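A small sketch of my own makes the point, using Python’s exact rationals (the `intersect_lines` helper is hypothetical): two segments with integer endpoints meet at a point with a fractional coordinate.

```python
from fractions import Fraction

def intersect_lines(p, r, q, s):
    """Exact intersection of the line through p, r with the line through q, s.
    Integer endpoints in, exact rational coordinates out; None if parallel."""
    d1x, d1y = r[0] - p[0], r[1] - p[1]
    d2x, d2y = s[0] - q[0], s[1] - q[1]
    denom = d1x * d2y - d1y * d2x
    if denom == 0:
        return None  # parallel (or coincident) lines
    # Parameter t along the first line, kept as an exact fraction.
    t = Fraction((q[0] - p[0]) * d2y - (q[1] - p[1]) * d2x, denom)
    return (p[0] + t * d1x, p[1] + t * d1y)

# Integer endpoints, but the intersection lands off the integer grid.
point = intersect_lines((0, 0), (2, 1), (0, 1), (2, 0))
assert point == (Fraction(1), Fraction(1, 2))
```

Rounding that `1/2` to either `0` or `1` moves the vertex off one of the input segments, which is precisely the topological change described above.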

I’m not a big fan of ‘infinite precision’ representations for use in games. They are extremely convenient in some settings, but they are expensive and are not necessarily a fix for robust operation by themselves. For such a representation to be robust you need to be able to guarantee that the amount of precision you require is bounded. If your algorithm generates new vertices as it goes along, and then loops to generate further vertices based on the previously generated ones, you may find you require an ever-increasing amount of precision. If you can bound the precision you require, then there are a few papers in the games industry press that discuss workable solutions, such as “Using Vector Fractions for Exact Geometry” by Thomas Young in Game Programming Gems 3, for example.
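A toy illustration of that precision growth (my example; repeated midpoints rather than repeated intersections, but the effect is the same): each new vertex derived from a previous one needs roughly one more bit, so the required precision grows without bound.

```python
from fractions import Fraction

def midpoint(a, b):
    # Constructing a new vertex from existing ones, exactly.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

anchor = (Fraction(1, 3), Fraction(1, 3))
p = (Fraction(0), Fraction(0))
for _ in range(10):
    p = midpoint(p, anchor)

# After ten derived vertices the coordinates already need ~10 bits
# of denominator; the growth continues indefinitely.
assert p[0] == Fraction(341, 1024)
assert p[0].denominator == 1024
```

An exact representation keeps this correct, but the storage and arithmetic cost per vertex keeps climbing, which is exactly the unbounded-precision trap described above.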

For Black & White 2, the player effectively had the ability to draw on the navigation mesh. They would place buildings and draw roads, arrange dynamic objects such as rocks and trees, and cast landscape-altering miracles. All of these operations triggered CSG-like modifications to the underlying mesh. We would resolve all line intersections, re-triangulate the affected areas of the map, flood-fill regions with navigation codes, and update the concurrently navigating villagers and armies. The result was an efficient solution, particularly compared to an early multi-resolution grid approach, but it was a complex system even by recent standards.

To make our approach robust, we opted for a 16-bit 2D integer representation and “snapped” all intersections onto this grid as they occurred. The snapping was based on “snap rounding”, a means of intersecting a set of line segments in a way that can handle the topological changes that the rounding introduces. The snapping kept a tight bound on our precision, even after continued modification by the player, and allowed us to make guarantees about the accuracy, and therefore the robustness, of the system. The 16-bit resolution turned out to be more than sufficient, even for the large terrain in a God-game. An attractive side effect of the approach was that we could reduce the resolution without affecting the robustness of the code, which also gave us a simple means of stress-testing corner cases.
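A minimal sketch of the snapping step (my illustration only; full snap rounding also re-routes any segment passing through the “hot pixel” that contains a snapped vertex, which is where the real complexity lives). The signed 16-bit range and the `snap` helper are assumptions for the example:

```python
from fractions import Fraction

# Hypothetical 16-bit grid, shown here as a signed range.
GRID_MIN, GRID_MAX = -2**15, 2**15 - 1

def snap(v):
    """Snap an exact coordinate to the nearest grid point, clamped to range.
    Python's round() on a Fraction is deterministic (ties go to even)."""
    return max(GRID_MIN, min(GRID_MAX, round(v)))

# Two distinct exact intersection points...
p = (Fraction(10, 3), Fraction(7, 2))   # (3.33..., 3.5)
q = (Fraction(16, 5), Fraction(7, 2))   # (3.2,     3.5)

# ...land on the same grid vertex after snapping: a topological change,
# but a predictable one that the surrounding algorithm is built to manage.
assert (snap(p[0]), snap(p[1])) == (snap(q[0]), snap(q[1])) == (3, 4)
```

The key property is that the error introduced by snapping is bounded by half a grid cell, so the precision required by everything downstream stays fixed no matter how many edits the player makes.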

However, alongside the importance of accuracy for correct operation, the second great Achilles’ heel of computational geometry is that it can be **hard to compose**. Joining algorithms together in a pipeline is not hard, but combining algorithms into one can involve developing an entirely new algorithm. Snap rounding by itself is fiddly, but not a major challenge for a good programmer. Delaunay triangulation by itself is also fiddly but will not induce excessive panic. An incremental form of Delaunay triangulation that uses snap rounding to resolve intersections, and that also maintains application-generated triangle attributes, global connectivity information, and concurrent searches, is, unsurprisingly, greater than the sum of its parts, and at least a man-year of effort.

The result worked very well, met the (typically) ambitious game design, and I expect it was more efficient than the simpler alternatives. But game development is a tough business. I think you need a very special case to justify this kind of effort, even if the compromise is hard to take.