To any physicists reading: Sorry, there’s going to be next to no quantum mechanics in this post. I’m just riffing on the name of Everett’s theory, and will be using ‘worlds’ in a different sense to him. (Though by all means, talk to me about quantum mechanics – it’s an interesting topic. Do get in touch if you can cogently explain how the Copenhagen interpretation can function without suffering from some serious anthropic issues).

A widely understood concept in games is the ‘game world.’ It is the virtual space in which the game takes place; it is the venue, the stage and backdrop. It’s full of interesting systems and processes: visuals, sound, motion, gravity, autonomous agents…

Yet while this world is presented to the player as a single cohesive, orchestrated whole, the reality of virtuality isn’t so singular. These different processes are frequently happening in their own independent worlds – like parallel dimensions that are deliberately overlapped. For example, what appears to be a highly detailed, carefully sculpted object in the visual world may be only a flat-faced cube in the physics world.

The visual world

The physics world (artist's impression)

We consider them the same object because they share certain pieces of information. The position of the sculpture and the cube, for example, may be shared, such that as the cube is moved around by physical processes, the player perceives the sculpture to be moving. The sculpture itself has no mass or velocity; indeed it exists in a world where nothing has mass or velocity, and no processes exist that move objects around. But similarly, while the cube has mass and velocity, it has no textures or shaders, and so there is no way to really describe how it “looks.”

So objects in the game will have different representations in different ‘worlds.’ There may even be some worlds in which they have no representation at all. Invisible walls, for example, have a representation in the physics world, but no representation in the visual world; while individual particles in a particle effect often exist only in the visual world, playing no part in physics, AI or sound. In reality, sound would bounce off the particles of dust from an explosion just as it would off any other surface, but in our simulations the dust simply doesn’t exist from the sound’s point of view.
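To make that concrete, here is a minimal sketch (the types and names are my own, not any particular engine's) of one object straddling several worlds: a shared transform, plus per-world representations that may simply be absent.

```cpp
#include <memory>
#include <string>
#include <vector>

struct Transform { float x = 0, y = 0, z = 0; };       // the shared information

struct RenderMesh   { std::string meshAsset; };         // exists only in the visual world
struct CollisionBox { float halfExtents[3]; float velocityY = 0; };  // physics world only
struct SoundEmitter { std::string soundAsset; };        // audio world only

struct GameObject {
    Transform transform;                                // shared between worlds
    std::unique_ptr<RenderMesh>   visual;               // null for an invisible wall
    std::unique_ptr<CollisionBox> physics;              // null for a dust particle
    std::unique_ptr<SoundEmitter> audio;                // null for most props
};

// Each world's update only sees the objects that exist in that world.
void stepPhysics(std::vector<GameObject>& objects, float dt) {
    for (auto& obj : objects) {
        if (!obj.physics) continue;                     // no physics-world representation
        obj.physics->velocityY -= 9.81f * dt;           // toy gravity
        obj.transform.y += obj.physics->velocityY * dt; // moves the shared transform...
    }
}

void renderVisuals(const std::vector<GameObject>& objects) {
    for (const auto& obj : objects) {
        if (!obj.visual) continue;                      // no visual-world representation
        // ...so the sculpture appears to move, even though nothing in the
        // visual world has mass or velocity.
        // submitToRenderer(obj.visual->meshAsset, obj.transform);  // hypothetical call
    }
}
```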

Why do we do this? Performance and simplicity, mostly. By setting up different ‘worlds’ for the different processes in the game, we can present those processes with data that is more suited to them. We could present the sculpture to the physics system in all its finely-detailed glory, but computing the collisions of such a complex object is a lot harder than computing collisions for a simple cube – and, if we’re honest, treating the sculpture as if it were a cube produces physical interactions that look good enough. By designing our different worlds carefully, we can simplify some of the problems that we face, turning intractable problems into tractable ones.
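As a toy version of that trade-off (my own sketch, not a real engine's pipeline), imagine collapsing the sculpture's thousands of render vertices down to the axis-aligned box that encloses them, and handing only the box to the physics world:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// The physics world's stand-in for the sculpture: just a box.
struct Box { Vec3 min, max; };

// Derive the cheap physics proxy from the expensive visual representation.
Box buildPhysicsProxy(const std::vector<Vec3>& renderVertices) {
    const float lo = std::numeric_limits<float>::lowest();
    const float hi = std::numeric_limits<float>::max();
    Box box{{hi, hi, hi}, {lo, lo, lo}};
    for (const Vec3& v : renderVertices) {
        box.min = {std::min(box.min.x, v.x), std::min(box.min.y, v.y), std::min(box.min.z, v.z)};
        box.max = {std::max(box.max.x, v.x), std::max(box.max.y, v.y), std::max(box.max.z, v.z)};
    }
    return box;   // colliding two boxes is a few comparisons; colliding two sculptures is not
}
```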

Take AI movement, for example. Imagine that you’re trying to write AI behaviour that’s based in the physics world. It’s logically possible – you can work out, from the physical representations of objects, which places your character can and cannot be, and thus you can find a path to a target that accounts for obstacles like walls… but it would be very, very slow.

So instead, the AI world is very simple: it’s just a flat, connected mesh of triangles – a “navigation mesh” – coupled with a law that says that all AI agents must be located on the surface of one of the triangles at any time. There are no walls in the AI world – only the edges of the world itself. And now that we only need to obey the law about staying on the mesh, pathfinding and movement become a lot simpler; movement is effectively 2D within the surface of a single triangle, with a little more complexity at triangle borders. (There are more complications in practice, of course, like dynamic obstacle avoidance, and characters that jump, but they become special cases – the vast majority of agent movement is simplified).

The AI world - basic navigation mesh, perhaps with a couple of "cover points" marked for agents to use
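A sketch of that one law in code (the names are mine, and real systems walk across triangle adjacencies rather than simply rejecting a move): a proposed destination is only accepted if it still lies on a triangle of the mesh.

```cpp
#include <array>

// Movement on the navmesh is effectively 2D within a triangle's surface.
struct P2 { float x, y; };
struct NavTriangle { std::array<P2, 3> v; };

// Which side of edge ab does p lie on? (sign of the 2D cross product)
static float side(P2 a, P2 b, P2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if p is inside the triangle or on one of its edges.
bool onTriangle(const NavTriangle& t, P2 p) {
    const float s0 = side(t.v[0], t.v[1], p);
    const float s1 = side(t.v[1], t.v[2], p);
    const float s2 = side(t.v[2], t.v[0], p);
    const bool hasNeg = (s0 < 0) || (s1 < 0) || (s2 < 0);
    const bool hasPos = (s0 > 0) || (s1 > 0) || (s2 > 0);
    return !(hasNeg && hasPos);
}

// The AI world's law: an agent may only occupy points on the mesh. There are no
// walls here to collide with - positions inside walls simply aren't on any triangle.
P2 moveAgent(const NavTriangle& current, P2 position, P2 desired) {
    return onTriangle(current, desired) ? desired : position;
}
```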

How can doing AI computations in a world without walls correctly move characters in a world with walls? Because we, as the architects of these worlds, carefully designed them to overlap in particular ways. The AI world’s navigation mesh, for example, does not have any triangles that would overlap with the physics world’s collision mesh. And the result? AI agents are prohibited – by their own world’s laws – from being in any position that would see them stuck in a wall, because such a position would not be on the surface of the AI world’s navigation mesh.

They appear to be exercising collision detection in the physics world – but in fact, their inability to walk through walls is almost coincidental.

When the worlds don’t overlap in the right way, or when one world lacks the laws required for a particular object to exist in it, we can get some odd results. For example, it’s common that the audio world in a game is a simple, infinite void, populated by a collection of point-based sound emitters and, somewhere, a ‘listener.’ While this is sufficient to model some of the behaviour we expect from sound – for example, as I walk closer to a sound source, the sound gets louder – it’s also lacking the ability to model sound occlusion. If I’m standing 20 metres from an explosion, it’ll sound the same regardless of whether those 20 metres are occupied by air or by concrete. (I understand that the Call of Duty games shipped with an implementation of sound occlusion, but beyond that I don’t see many people talking about it. Anyone know how their implementation works?)
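In code, that audio world can be almost nothing: a listener position, some emitter positions, and an attenuation curve over the straight-line distance between them. The sketch below (my own names, loosely following a common inverse-distance model) deliberately shows the limitation: nothing about what occupies the space between the two points ever enters the calculation.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float distanceBetween(Vec3 a, Vec3 b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Inverse-distance attenuation, clamped so the gain never exceeds 1. The 20 metres
// between listener and explosion could be air or concrete - the result is identical,
// because the audio world contains only points in a void.
float emitterGain(Vec3 listener, Vec3 emitter, float referenceDistance = 1.0f) {
    const float d = distanceBetween(listener, emitter);
    return referenceDistance / std::max(referenceDistance, d);
}
```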

Oddly, players are used to a certain amount of discrepancy between worlds – they don’t expect particles to get in the way of bullets, or the player character to bob up and down while walking over a pile of gibs. But many other situations are considered problematic: walking against a wall and seeing the player’s mesh intersect it, because the physics representation is tighter than the visual one; or a protruding bit of floor that the player appears to be able to stand on, but actually falls straight through, because it’s absent from the physics world.

Ensuring that these worlds are correlated in a way that makes sense to the player is an important task. Much of my time working in QA was spent dealing with failures of this type.

So, given that managing that correlation is important, how can we do it most effectively?

At the most basic level we can simply test the game as a whole, exploring the superposition of all universes and their corresponding processes. This is a natural part of game testing, but it’s slow and error-prone: Walking into every corner of every level to ensure that the player cannot “fall out of the world” is not a very fun task, I assure you.

The problem is that the information we can gather about the different worlds in their “natural” states is frequently limited. We can see several hundred metres ahead of us in the visual world, but we can only touch the objects immediately next to us in the physics world. So we can inspect the visuals of the game world en masse, but inspecting the physics requires touching everything in turn, kicking every rock to check whether it kicks back. It’s even worse with things like the AI world – you’d have to coax and push agents around the world to check that they don’t suddenly behave strangely in one place.

So the first obvious solution is to transpose one world into another: to visualise worlds that aren’t usually visual. Many game engines do this: rendering all the physics polygons, or the AI’s navigation mesh, as a translucent overlay over the regular visual world. Different colours are used to indicate different properties, such as “sticky” or “only traversable by crawling agents.” Transposing the other worlds into the visual world is usually the most convenient way to work, as we can gather a large amount of precise information quickly in that world, but it’s not the only transposition available: we could use audio, too. Imagine making physics surfaces close behind the camera emit a low hum, so you can tell when the camera is backing into something without having to look at it. There are some interesting possibilities here for enabling game development for the visually impaired, too.
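A sketch of that kind of transposition (drawDebugTriangle here is a stand-in for whatever immediate-mode debug renderer your engine provides): walk the non-visual world's geometry and push it into the visual world as translucent, colour-coded triangles.

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Colour { std::uint8_t r, g, b, a; };

enum class SurfaceFlag { Normal, Sticky, CrawlOnly };

struct NavTriangle { Vec3 a, b, c; SurfaceFlag flag; };

// Stand-in for an engine's immediate-mode debug renderer (hypothetical);
// a real one would submit the triangle to the render pass with alpha blending.
void drawDebugTriangle(Vec3 a, Vec3 b, Vec3 c, Colour colour) {
    std::printf("debug tri at (%.1f, %.1f, %.1f), colour %d %d %d %d\n",
                a.x, a.y, a.z, int(colour.r), int(colour.g), int(colour.b), int(colour.a));
}

// Transpose the AI world into the visual world: one translucent triangle per
// navmesh polygon, coloured by its properties.
void drawNavMeshOverlay(const std::vector<NavTriangle>& mesh) {
    for (const NavTriangle& t : mesh) {
        Colour colour{0, 128, 255, 96};                        // default: translucent blue
        if (t.flag == SurfaceFlag::Sticky)    colour = {255, 128, 0, 96};
        if (t.flag == SurfaceFlag::CrawlOnly) colour = {128, 0, 255, 96};
        drawDebugTriangle(t.a, t.b, t.c, colour);
    }
}
```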

The next solution is to try to ensure that all the worlds are being furnished with data from a common source. We can hand-craft collision and navigation meshes, but aside from the risk of human error, this is very prone to desynchronisation – someone changes the render mesh and doesn’t update the collision mesh. Instead, we can seek to generate our collision and navigation meshes from our render meshes. Tools exist for this already: the Recast toolkit, for example, is excellent (and free!). Depending on your game’s needs, you’ll likely need to write some custom analysis and generation methods, but existing tech can provide a lot of the infrastructure for that.
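Even with generation in place, it helps to make the pipeline notice when derived data has gone stale. Here's an illustrative sketch (not a real tool; the generator functions are stubs standing in for your collision simplifier and a navmesh builder such as Recast): store a hash of the source render mesh alongside the derived meshes, and regenerate whenever it no longer matches.

```cpp
#include <cstddef>
#include <functional>
#include <string>

struct MeshAssets {
    std::string renderMeshData;     // the source of truth (serialised render mesh)
    std::string collisionMeshData;  // derived
    std::string navMeshData;        // derived
    std::size_t sourceHash = 0;     // hash of renderMeshData when the derived data was built
};

// Stubs standing in for real generators (e.g. a geometry simplifier, or Recast for
// the navmesh). They exist only to make the sketch self-contained.
std::string generateCollisionMesh(const std::string& renderMesh) { return "collision:" + renderMesh; }
std::string generateNavMesh(const std::string& renderMesh)       { return "nav:" + renderMesh; }

// If someone changed the render mesh without touching the derived meshes,
// the hash mismatch catches it and the pipeline rebuilds them from the common source.
void rebuildIfStale(MeshAssets& assets) {
    const std::size_t currentHash = std::hash<std::string>{}(assets.renderMeshData);
    if (currentHash == assets.sourceHash) return;      // still in sync
    assets.collisionMeshData = generateCollisionMesh(assets.renderMeshData);
    assets.navMeshData       = generateNavMesh(assets.renderMeshData);
    assets.sourceHash        = currentHash;
}
```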

The most important thing, though, is simply to be very clear about which worlds your game takes place in, and which data and processes are present in each. Ensure that your processes don’t allow anyone to confuse a render mesh with a collision mesh. Ensure that your codebase doesn’t perform AI walkability tests against points that exist on the render mesh but not on the AI mesh. And so on.
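One cheap way to enforce that kind of clarity in code (a sketch of the general idea, not any particular engine's types) is to give each world's data its own type, so that handing a render mesh to an AI query is a compile error rather than a subtle bug:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Distinct types per world. Even where the underlying data looks similar,
// the compiler will refuse to let one stand in for another.
struct RenderMesh    { std::vector<Vec3> vertices; /* plus normals, UVs, materials... */ };
struct CollisionMesh { std::vector<Vec3> vertices; };
struct NavMesh       { std::vector<Vec3> vertices; };

// AI questions are asked of the AI world's mesh...
bool isWalkable(const NavMesh& nav, Vec3 point) {
    (void)point;
    return !nav.vertices.empty();    // stub: a real test would locate the containing triangle
}

// ...and physics questions of the physics world's mesh.
bool raycast(const CollisionMesh& world, Vec3 from, Vec3 to) {
    (void)from; (void)to;
    return !world.vertices.empty();  // stub: a real query would intersect the collision geometry
}

// isWalkable(someRenderMesh, point);  // would not compile: render data cannot answer AI questions
```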

Understanding and embracing the segregation of processes into multiple, cohabiting game worlds can be a very useful tool for breaking down what your game is doing into manageable pieces. Once you’ve got it, you can zero in much more quickly on the causes of problems; and even, when faced with some new and difficult problem, spin off an entirely new world to contain, reduce, and resolve it.