Your game is running too slow – what should you do? Should you wait until all the features are in place and then throw your smarty-pants engine coders at it? Should you dedicate resources to optimising the game now? Or should you have considered performance much earlier and risked the Wrath of Knuth?
When writing a game you have a number of responsibilities to meet as a coder. Your code must be:
1. correct,
2. performant, and
3. maintainable.
Correct code is code that does what it is supposed to do. Most programmers are aware of this constraint, and it is their primary (sometimes solitary) goal – it’s also the most easily verified. Symptoms of incorrect code include crashing, disembodied limbs and an infinitely slow framerate.
Performant code is a necessity for console game programming. Console code should be performant with respect to both memory and execution time. Consoles have a fixed amount of memory and a fixed number of cycles available, and code which ignores those constraints is, at best, not finished yet. A game will not function on a console if it uses too much memory, thereby violating responsibility number 1. Overly slow code reduces the quality of the game – it is still correct, yet there is less for a designer to work with, less that an artist can display.
Code must also be maintainable. This doesn’t necessarily mean that you should produce code that will be used by thousands for generations to come, but that the code you have written is understandable and modifiable by another programmer (or even yourself – I still remember coming back to a Space Invaders clone that I wrote in comment-free 68000 assembly after a few months away. I cursed myself vehemently over that, and learnt that just because you understand it now doesn’t mean that you will six months down the track.)
Most coders I know wouldn’t argue with points 1 and 3, but so many resist (or ignore) the writing of performant code with a passion. Some even seem to actively pursue the production of non-performant code. When discussing the performance of someone’s code, the quote that is almost guaranteed to rear its 30-odd-year-old head is
“Premature optimization is the root of all evil”
- Knuth, “Structured Programming with go to Statements”, Computing Surveys, Vol. 6, No. 4, December 1974.
You can read it here in its original format. Skip to page 268 if you don’t want to read it all – but you should at least read the paragraphs around it to see the context it’s used in. He’s not saying don’t optimise, he’s saying make sure you optimise the right stuff. Far be it from me to disagree with Dr. Knuth – I fully agree with him. Premature optimisation can be bad. Just like premature ejaculation, premature optimisation can leave you with a sticky mess that you’re just going to have to clean up later.
The big question is: when is it too soon to optimise code?
Quite simply, it is premature to optimise code before you know what it does.
It is not premature to consider performance when designing your code. You should have an idea of how much time your code will take to execute – you should at least know if it will be a potential bottleneck or not. In the cases where it is likely to be a bottleneck you most definitely should consider its performance during the design phase – in fact it should be a key influence on the design.
It is, however, premature to optimise purely on random program sample hits or obviously inefficient assembly without considering how that code is used.
I love optimising code. I love the quantifiable results – seeing the number of milliseconds spent in a function drop by an order of magnitude, seeing the frame rate climb back into double figures, seeing god-awful code morph into something simple, efficient, neat. I optimise for a living now – my children’s education, clothing and video games depend on me making someone else’s code run fast – and yet I still feel the pull of premature optimisation. I see a load-hit-store (LHS) penalty and want to fix it immediately, even though it’s in code that is only called twice a frame. I see a linked list and I want to beat it with a stick until it’s a nice, sensible flat array. And the less said about scene trees, the better.
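To make the linked-list gripe concrete, here’s a minimal sketch of the kind of transformation I mean (the Particle type and the function names are invented purely for illustration): a linked-list traversal chases a dependent pointer on every step, and nodes scattered across the heap mean a cache miss per node, while a flat array is contiguous and predictable, so the hardware prefetcher can stay ahead of the loop.

```cpp
#include <vector>

// Hypothetical particle data, purely for illustration.
struct Particle {
    float x, y, z;
    float energy;
};

// Linked-list version: each step is a dependent pointer load,
// and heap-scattered nodes make cache misses the common case.
struct ParticleNode {
    Particle      data;
    ParticleNode* next;
};

float TotalEnergyList(const ParticleNode* head) {
    float total = 0.0f;
    for (const ParticleNode* n = head; n != nullptr; n = n->next)
        total += n->data.energy;
    return total;
}

// Flat-array version: contiguous memory with a predictable access
// pattern – the prefetcher keeps the next cache lines inbound.
float TotalEnergyArray(const std::vector<Particle>& particles) {
    float total = 0.0f;
    for (const Particle& p : particles)
        total += p.energy;
    return total;
}
```

Both functions compute the same result; the difference is entirely in how the data reaches the CPU – which is exactly why this itch is so hard to leave unscratched, even when the list is only walked twice a frame.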
To effectively optimise you need to be able to see the big picture. You need to grok the higher-level flow of code and data – what happens, how it happens, how long it takes to happen, what data is used and how that data is laid out. Optimisations at the high level can get you big wins with minimal code/data changes – but the big investment is the time you spend understanding that code in the first place. You need to own it – only then can you effectively optimise it.
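As one example of the kind of high-level, data-layout win I mean, here’s a hedged sketch (the GameObject fields are made up for illustration) of moving hot data out of a fat structure: an array-of-structures update drags every field of every object through the cache whether it’s used or not, while a structure-of-arrays layout touches only the data the loop actually needs.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structures: updating positions pulls cold fields
// (health, name) through the cache along with the hot ones.
struct GameObjectAoS {
    float pos[3];
    float vel[3];
    int   health;
    char  name[32];   // cold data, loaded anyway
};

void UpdateAoS(std::vector<GameObjectAoS>& objs, float dt) {
    for (GameObjectAoS& o : objs)
        for (int i = 0; i < 3; ++i)
            o.pos[i] += o.vel[i] * dt;
}

// Structure-of-arrays: the update streams through only the hot
// data, so far more useful bytes arrive with every cache line.
struct GameObjectsSoA {
    std::vector<float> posX, posY, posZ;
    std::vector<float> velX, velY, velZ;
    std::vector<int>   health;   // cold data kept out of the hot loop
};

void UpdateSoA(GameObjectsSoA& objs, float dt) {
    const std::size_t n = objs.posX.size();
    for (std::size_t i = 0; i < n; ++i) {
        objs.posX[i] += objs.velX[i] * dt;
        objs.posY[i] += objs.velY[i] * dt;
        objs.posZ[i] += objs.velZ[i] * dt;
    }
}
```

A change like this is cheap to type but expensive to make safely – you need to know every piece of code that touches the layout first, which is exactly the understanding-the-big-picture investment above.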
As bad as premature optimisation is, last-ditch optimisation is worse. Three months before shipping is too late to optimise – you need to do it much, much earlier. Last-ditch optimisation is dangerous: the code towards the end of development is as complex as it will ever get. Even small changes can have far-reaching effects, and so the larger changes required to make dramatic improvements in performance are often too risky at that stage of development. Once you’ve fixed the obvious bottlenecks you’re left with Uniformly Slow Code. The next stage of optimisation is then asset reduction (optimise textures, meshes, numbers of objects, etc.), followed by feature reduction. The final stage is studio reduction.