Comments on: The Root Of All Evil

A lot of the speed comes from using efficient data. If you wait until the end of the project to check performance, it's very likely that changing data is even more of a nightmare at that point. If your game is massively level/content based you ought to check the work of content creators during production. If you spot particularly inefficient data early then you can decide on the best plan of action (which often ends up postponing optimization, but it's good to actively take that decision and understand what's going on with the data set).

By: Robin (Thu, 17 Mar 2011 01:44:03 +0000) /2011/03/14/the-root-of-all-evil/#comment-1646

“…or studio reduction”

SO TRUE

By: Tony Albrecht (Wed, 16 Mar 2011 05:14:11 +0000) /2011/03/14/the-root-of-all-evil/#comment-1632

Much <3 for Tony, but we both love the performance argument ;) My contribution to your awesome blog.

Psychoanalysis questions for the balanced engine programmer:

* Have you ever had a project that just before shipping had a horrible frame rate and it was a nightmare to fix?

* Did you find that no other programmers had any clue how to fix it?
* Did you find that you had to go into their “self-indulgent”, often “horribly architected”, code and try to reorganize it?
* … did the programmer in question go home because there was nothing they could contribute?
* Was the problem so bad that all the data also had to change?
* Did you have to give up and resolve to fix it on the next project?
* Were you horribly scared by that experience?
* Did it happen multiple times?
* Do you now think that most programmers have no idea how the compiler/architecture works?

… and, er …

* Do you now think performance is the most important virtue of code?

By: Daniel (Tue, 15 Mar 2011 18:09:40 +0000) /2011/03/14/the-root-of-all-evil/#comment-1621

I like to reframe this as “when is optimization cost-effective?”. Well, duh, when the benefits are greater than the costs, and both benefits and costs have to be adjusted for uncertainty. Your emphasis on measurement is spot on… it reduces uncertainty as to the benefits.

Optimization is cost-effective when the benefits are large. We often know during the design phase where the performance bottlenecks are going to be, and therefore what has to be optimized.

Optimization is also cost-effective when the costs are low. Here I’m talking about craftsmanship and professional pride… all the little things that don’t really cost any more to do right the first time, but which aren’t worth going back and fixing later. Individually, the benefits are small, but they add up to an application that runs twice as fast.

By: TomF (Tue, 15 Mar 2011 03:38:27 +0000) /2011/03/14/the-root-of-all-evil/#comment-1602

Performance is no excuse for bad memory management.

Fast code doesn’t have to be unmaintainable – keeping an unoptimised, easily readable version of an optimised function nearby can make it easier to make and verify changes to both sets of code. Additionally, you can generally get a serious performance boost by just optimising the data layout on modern consoles – it’s rarely the code itself which is the bottleneck.

So yeah, it’s a balancing act. You want to get the features in so you don’t want to waste time optimising where it’s not necessary, but you also don’t want to get caught out at the end of the project with code that you can’t optimise due to bad decisions made earlier in the title’s development.

By: jonas (Mon, 14 Mar 2011 17:50:30 +0000) /2011/03/14/the-root-of-all-evil/#comment-1589

By: Kent Quirk (Mon, 14 Mar 2011 15:29:48 +0000) /2011/03/14/the-root-of-all-evil/#comment-1585

Keeping frame rate up throughout development is so important and often overlooked. It might be possible to claw back 2-3 fps here and there, but trying to find 10? That's 330ms of processing you need to find! Keep your frame rate, and keep your sanity through beta :)

By: Richard Fine (Mon, 14 Mar 2011 14:21:15 +0000) /2011/03/14/the-root-of-all-evil/#comment-1580