Like all middleware companies, we provide support.  The majority of it involves answering questions of the form ‘how would you recommend we do X?’.  One popular question is something a lot of us face fairly often: should we allocate for the worst case, for the best case, or dynamically allocate to fit all situations?

This issue was also raised by Alex Darby in ‘How I discovered my favourite templated list class’ the other week.  Alex was extolling the virtues of a fixed-size list class over the generic types found in libraries like the STL.

An example

Your program needs to parse an XML file and allocate structures for the data it contains. You don’t know how many structures are in the file, so you have three options.

1. Parse the file once to find the size required, then parse it again to store the data for an exact fit. (Exact fit)
2. Parse the file once, calling new as you need it or placing the data into a growable container like a std::vector. (Dynamic fit)
3. Create a fixed-size buffer, pool, list etc., read the file once and cater only for the worst case. (One Size Fits All, or OSFA)

This leads to a nice table of pros and cons.  Note that these are potential problems, so they may not always apply.

                    Exact     Dynamic      OSFA
Difficulty          High      Med to Low   Med to Low
Maintenance         High      Low          Low
Implementation time High      Low          Low
Fragmentation       Low       High         Low
Loading time        High      Med to Low   Med to Low
CPU time            Low       Med to High  Low
Wasted memory       Very Low  Low to Med   Med to High
Potential bugs      High      Med          Low

Exact fit

This looks pretty good on paper: zero fragmentation and the lowest memory consumption.  Loading and CPU time are the biggest problems, although as long as you don’t actually re-read the file from disk between the two passes, it’s unlikely to be that noticeable at load time.  There are two further serious downsides that make this option a no-no for me.

The first is that you have to create and manage more code.  That means more work, more maintenance and more bugs.

The second is that you have no idea where you stand in terms of memory consumed. Three days before master, a designer could come along, add a few options and cause one level to run out of memory. You would need to put asserts and warnings in to catch these issues early; failing to do so will waste other people’s time while they track down the cause of the extra memory use.  And if you are putting in these warnings, you have already decided on an upper limit, so you may as well go with OSFA.
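To make the two-pass idea concrete, here is a minimal sketch.  Everything in it is hypothetical: a comma-separated string stands in for the real XML data, and Entry, CountEntries, ParseEntries and LoadExactFit are stand-ins for whatever your actual parser looks like.

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <vector>

struct Entry { int value; };

// Pass 1: walk the data once, counting entries without storing anything.
static std::size_t CountEntries(const char* data)
{
    if (*data == '\0') return 0;
    std::size_t count = 1;
    for (const char* p = data; *p; ++p)
        if (*p == ',') ++count;
    return count;
}

// Pass 2: walk the data again, writing into an exactly-sized buffer.
static void ParseEntries(const char* data, Entry* out)
{
    while (*data)
    {
        (out++)->value = std::atoi(data);
        const char* comma = std::strchr(data, ',');
        if (!comma) break;
        data = comma + 1;
    }
}

std::vector<Entry> LoadExactFit(const char* data)
{
    std::vector<Entry> entries(CountEntries(data));  // exact fit, no slack
    ParseEntries(data, entries.data());
    return entries;
}

Two walks over the data, an exactly-sized allocation, and twice the parsing code to keep in sync: that last point is the maintenance cost mentioned above.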

Dynamic fit

Fragmentation and peak memory rule out option two as viable in my opinion.  A std::vector commonly doubles its capacity when it grows (the exact growth factor is implementation-defined).  So a vector holding 1 million 16-byte elements (16MB) allocates a new block of around 32MB when it expands, copies everything across, and then frees the original 16MB.  That freed block can lead to a large amount of fragmentation.
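Since the growth factor is implementation-defined, it’s worth watching what your particular STL actually does.  This little test prints each reallocation as a vector grows:

#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<char> v;
    std::size_t lastCapacity = 0;

    for (int i = 0; i < 1000000; ++i)
    {
        v.push_back(0);
        if (v.capacity() != lastCapacity)
        {
            // At each growth the old block and the new, larger block
            // are both live while the elements are copied across, and
            // only then is the old block freed back into the heap.
            std::printf("size %zu -> capacity %zu\n", v.size(), v.capacity());
            lastCapacity = v.capacity();
        }
    }
    return 0;
}

On a typical implementation you will see the capacity roughly double each time, which is where the transient peak (old block plus new block held simultaneously) comes from.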

Using new and delete also increases the amount of CPU time required.  During loading this isn’t really an issue; even a slow allocator is unlikely to become a bottleneck.  During gameplay it may have a noticeable effect.

Performance-wise, your memory may end up scattered all over the heap.  This can thrash your cache and really damage performance.

One Size Fits All

There is a slight difficulty in working out upper limits with this method, although they can often be taken from the design document.  The serious problem is the wasted memory.  Maintenance-wise, all you need to monitor is the upper limit, adjusting it as required.

In order to really take advantage of this approach you must also be able to allocate the total space needed as one block.  This usually means some form of custom allocation method to redirect the allocations into that block; without one it is much harder to track the memory consumed.  It also means that all your data sits close together in memory, which can have significant benefits for the performance of your game.
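As a rough illustration, here is a minimal fixed-pool sketch along those lines: one block allocated up front, a simple bump allocator handing out space from it, and an assert when the budget is blown.  FixedPool is a hypothetical name, and alignment handling, thread safety and freeing are all omitted for brevity; a real implementation would need them.

#include <cassert>
#include <cstddef>
#include <cstdlib>

class FixedPool
{
public:
    explicit FixedPool(std::size_t budget)
        : m_block(static_cast<char*>(std::malloc(budget)))
        , m_budget(budget)
        , m_used(0)
    {
        assert(m_block && "Failed to allocate pool block");
    }

    ~FixedPool() { std::free(m_block); }

    // Bump allocation: everything lives in one contiguous block, so
    // allocations made together end up next to each other in memory.
    // (Alignment handling is omitted for brevity.)
    void* Allocate(std::size_t bytes)
    {
        assert(m_used + bytes <= m_budget && "Pool budget exceeded");
        void* p = m_block + m_used;
        m_used += bytes;
        return p;
    }

    std::size_t Used() const { return m_used; }

private:
    char*       m_block;
    std::size_t m_budget;
    std::size_t m_used;
};

Bumping a pointer also makes each allocation effectively free at runtime, and Used() gives you an exact figure for the memory consumed against the budget at any point.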

If the budget is exceeded, an assert or warning should fire, alerting the person making the changes that something has happened.  This stops the same bug being investigated by several people and keeps things ticking over.  The downside is that someone may need to increase the budget, which may require a recompile and waste some time.

One option is to raise a warning and continue to function, with the overflowing allocations redirected to the main heap. You need to ensure that these warnings are reported and fixed, not brushed aside or, worse, disabled.
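Sketched against the FixedPool above (and assuming the method is added to its declaration), that warn-and-fall-back behaviour might look something like this.  The fprintf is a stand-in for whatever warning or reporting hook your project uses.

#include <cstdio>
#include <cstdlib>

// Warns and falls back to the main heap rather than asserting.  Note
// that fallback pointers come from the heap, not the pool, so they
// would need to be tracked and freed separately.
void* FixedPool::AllocateOrFallback(std::size_t bytes)
{
    if (m_used + bytes > m_budget)
    {
        std::fprintf(stderr, "WARNING: pool budget (%zu bytes) exceeded, "
                             "falling back to the main heap\n", m_budget);
        return std::malloc(bytes);
    }
    void* p = m_block + m_used;
    m_used += bytes;
    return p;
}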

What about the wasted memory?

Some memory does get wasted: hopefully none in the worst case, but in the best case quite a lot could go unused.  As Alex mentioned, a common counter-argument is that you could use the wasted memory for something else.  You can, but what if something needs to be added later that you no longer have memory for?  You are back to hunting for enough memory for your data while someone using OSFA is still working away without a problem.

Summary

We nearly always recommend the One Size Fits All method.  There are times when it’s not appropriate, but those are case-by-case situations.  It helps with budgeting and keeps bugs to a minimum.