Just a quick one this fortnight – I’ve not got much time to spare…

The big, unannounced project I’m working on is reaching a point where most of the first-pass features are in, and now we need the game to run at an acceptable speed. It turns out that trying to render 8 million normal-mapped triangles 60 times a second on a GeForce 8800 is asking a little much of the poor thing. Who knew?

Rendering 8 million triangles at 60Hz on a GeForce 8800 requires 1.21 jiggawatts of power.

So, a large chunk of the past couple of weeks has been spent getting various performance-improving systems in place: a level-of-detail system, some simple area-based culling, and so on. Unity’s built-in profiling tools range from “rudimentary” to “undocumented,” so I’ve supplemented them with some of my own – the most useful of which, so far, has been a simple tool that generates a list of all the meshes presently being rendered in the scene, ordered by their total contribution to the polygon count. Very helpful for telling us where to focus our art optimization efforts.
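For the curious, the whole tool boils down to very little code. Here’s a minimal sketch of the idea in Unity C# – the class name and the exact filtering are my own illustration, not the actual tool:

```csharp
using System.Linq;
using UnityEngine;

// Illustrative sketch only – the names and filtering here are assumptions,
// not the real tool.
public static class PolyCountReport
{
    public static void Print()
    {
        var meshes = Object.FindObjectsOfType<MeshFilter>()
            .Where(mf => mf.sharedMesh != null)
            // Only count meshes whose renderer is currently visible.
            .Where(mf => { var r = mf.GetComponent<Renderer>(); return r != null && r.isVisible; })
            // Mesh.triangles is a flat index buffer: three entries per triangle.
            .Select(mf => new { mf.name, Tris = mf.sharedMesh.triangles.Length / 3 })
            .OrderByDescending(m => m.Tris);

        foreach (var m in meshes)
            Debug.Log(m.name + ": " + m.Tris + " tris");
    }
}
```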

One of the things my polygon-counter tool doesn’t do is generate percentages.

It would be easy to add, but I consciously decided against it, because it’s very easy to be misled by them. I’d have thought that most people in game development know how percentages work, but this kind of misreading seems to happen all the time when people look over my shoulder at the profiler graph: “Hey, physics is using 30% of your frame time – it was only using 10% when I looked a few hours ago. Did you break something?”

No, I optimized the renderer, dropping the total frame time from 20ms to 6.6ms. The physics is taking 2ms just like it was before. Kthx.

If you’re focusing your optimization on decreasing the percentage that a particular component takes up, then you can always achieve that by making everything else slower… whenever you’re dealing with a percentage, you need to remember to ask: A percentage of what? And whenever you’re comparing two percentages, you need to remember to ask: Are they percentages of the same thing?
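To put the numbers from that exchange side by side (a trivial sketch – the variable names are mine):

```csharp
// Same physics cost, different denominators – the share triples
// even though nothing about the physics changed.
float physicsMs = 2.0f;

float shareBefore = physicsMs / 20.0f; // 0.10 -> "physics is 10% of the frame"
float shareAfter  = physicsMs / 6.6f;  // ~0.30 -> "physics is 30% of the frame"

System.Console.WriteLine(shareBefore * 100 + "% vs " + shareAfter * 100 + "%");
```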

Frames-per-second values are similarly deceptive, because the same piece of work will move the number by a different amount depending on the framerate you start from. (Also, shouldn’t they be measured in hertz (Hz)? Physics, people!)

Say I’m running at 30FPS in a development build, and I’ve written a new feature. I turn it on, and the framerate drops to 28FPS – that’s not so bad, right? Only 2 frames per second dropped? Why, that’s only about 7% of the original framerate…

Later on in the project, though, when the art’s been optimized and we’re running at more like 80FPS, I turn the feature on again – and this time the framerate drops from 80FPS to 67FPS. Whoa! 13FPS drop! That’s more than six times what it was before. Is the code slower?

The code’s taking exactly the same amount of time (2.38ms per frame). Most processes in a game will take the same amount of time regardless of what your framerate is; what determines their cost is how complex they are and how much data they’ve got to process, not how fast the overall system is. And because the FPS figure is the reciprocal of the frame time – i.e. 1 / (frame time in seconds) – adding or removing a few milliseconds at a low FPS will produce a much smaller change in the number than adding or removing the same milliseconds at a high FPS.
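A quick way to keep yourself honest is to convert FPS figures back into milliseconds before comparing them. A sketch (the helper name is mine):

```csharp
using System;

static class FrameTimeMath
{
    // FPS is the reciprocal of frame time: ms = 1000 / fps.
    static float MsPerFrame(float fps) { return 1000f / fps; }

    static void Main()
    {
        // The same feature, measured at two different baseline framerates:
        float slowBuild = MsPerFrame(28f) - MsPerFrame(30f); // ~2.38ms for a 2 FPS drop
        float fastBuild = MsPerFrame(67f) - MsPerFrame(80f); // ~2.43ms for a 13 FPS drop
        Console.WriteLine(slowBuild + "ms vs " + fastBuild + "ms");
    }
}
```

Measured in milliseconds, the two “very different” drops turn out to be the same work.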

If you want your game to run at a solid framerate of X frames per second, then you’ve got 1000/X milliseconds to burn each frame. Here’s a handy table for you:

No, wait, sorry, my bad. (Seriously, Google Images? That’s the handiest table you could give me?)

| Desired framerate | Milliseconds to burn | Framerate at 1ms under budget | Framerate at 1ms over budget |
| --- | --- | --- | --- |
| 30Hz | 33.3ms | 31Hz | 29Hz |
| 50Hz | 20ms | 52Hz | 47Hz |
| 60Hz | 16.6ms | 64Hz | 57Hz |
| 100Hz | 10ms | 111Hz | 90Hz |

See how adding or removing a single millisecond to the per-frame workload makes a much bigger difference to a 100Hz framerate than to a 30Hz framerate?
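If you’d rather derive those numbers than take my word for them, the whole table is one loop (a quick sketch – it keeps one decimal place, so the odd last digit differs from the rounded values above):

```csharp
using System;

static class FrameBudget
{
    static void Main()
    {
        foreach (float hz in new[] { 30f, 50f, 60f, 100f })
        {
            float budget   = 1000f / hz;              // ms available per frame
            float oneUnder = 1000f / (budget - 1f);   // framerate if each frame does 1ms less
            float oneOver  = 1000f / (budget + 1f);   // framerate if each frame does 1ms more
            Console.WriteLine($"{hz}Hz: {budget:F1}ms/frame, {oneUnder:F1}Hz under, {oneOver:F1}Hz over");
        }
    }
}
```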

Now, pick a target framerate, write down the millisecond budget per frame it gives you, and then forget about the framerate. Focus on concrete time. The framerate lies to you, but concrete’s always been supportive.

This is, of course, all quite apart from the fact that in modern multicore games, it may not even make sense to be talking about one single “framerate.” You’ve got to start separating it out into render frames, physics frames, input frames…

So, anyway, those are two numbers that deceive you: profiler percentages and framerates. Anyone got other frequently-misinterpreted, pathologically misleading, or just plain wrong numbers that they’d like to share with the class?