Comments on: Binary shaders — not that big of a deal

Modern OpenGL supports compiled shaders through GL_ARB_get_program_binary (compile them once, when the game first runs on the client machine, and subsequent loads will be faster). It doesn’t support binary, pre-compiled (i.e. machine-independent) shaders.
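For concreteness, here is a minimal sketch of that caching scheme in C (the helper names and on-disk layout are my own, not part of the extension; an initialized GL 4.1+ context and a function loader such as glad or GLEW are assumed):

```c
#include <stdio.h>
#include <stdlib.h>
/* GL types/functions come from your loader's header (glad, GLEW, ...). */

/* Cache a linked program's driver-specific binary on disk. */
static void save_program_binary(GLuint program, const char *path)
{
    /* Hinting before linking improves the odds a binary is retrievable:
     * glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE); */
    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
    if (length <= 0) return;

    void *blob = malloc((size_t)length);
    GLenum format = 0;
    glGetProgramBinary(program, length, NULL, &format, blob);

    FILE *f = fopen(path, "wb");
    if (f) {
        fwrite(&format, sizeof format, 1, f);
        fwrite(blob, 1, (size_t)length, f);
        fclose(f);
    }
    free(blob);
}

/* Restore a cached binary; returns 0 when the driver rejects it (after a
 * driver update, a different GPU, ...), in which case the caller must
 * fall back to compiling the GLSL source. */
static int load_program_binary(GLuint program, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    GLenum format = 0;
    fread(&format, sizeof format, 1, f);
    fseek(f, 0, SEEK_END);
    long size = ftell(f) - (long)sizeof format;
    fseek(f, (long)sizeof format, SEEK_SET);

    void *blob = malloc((size_t)size);
    fread(blob, 1, (size_t)size, f);
    fclose(f);

    glProgramBinary(program, format, blob, (GLsizei)size);
    free(blob);

    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    return linked == GL_TRUE;
}
```

Since the cached binary is machine- and driver-specific, a driver update can invalidate it, so the GLSL source always has to ship as a fallback.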

And yes, parsing takes more time than loading binary data. But GLSL is quite a simple language, and I’m not sure the difference is significant (given that the driver needs an AST anyway). And compilation (basically optimization) *could* be done offline, with GLSL as both input and output. Something like #pragma already_optimized would switch the driver into a non-optimizing (faster) mode. That way you would get all the benefits of binary shaders (faster compilation, reliable optimizations, maybe source code “protection”) without having to maintain a second (binary) shading language.
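To make the idea concrete, a hypothetical sketch; #pragma already_optimized is the proposal above, not part of any GLSL specification, and today’s drivers would simply ignore the unknown pragma:

```c
/* Hypothetical: an offline tool emits pre-optimized GLSL carrying the
 * proposed pragma; a cooperating driver would take a fast,
 * non-optimizing compile path. */
static const char *preoptimized_fs =
    "#version 330 core\n"
    "#pragma already_optimized\n"  /* the proposed hint */
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.5, 0.25, 1.0); }\n";

static GLuint compile_preoptimized(void)
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &preoptimized_fs, NULL);
    glCompileShader(fs); /* under the proposal: parse only, skip optimization */
    return fs;
}
```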

I’m sure this could be addressed sometime in the future, but at the moment OpenGL/GLSL is still evolving rapidly (separate shader objects, for example, are a big leap forward) and I believe there is more important work to do (DSA, anyone?).

By: julien (Tue, 03 May 2011 11:43:54 +0000)

I’ve been trying to convince people of this for a long time now. You may gain a bit of time in the initial parse step, but the majority of your compile time is spent on the back end, in the driver. And consider the architectures where context state has to be baked into the shader: you can’t actually do the full compile until the draw is issued.
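One hedge engines use against exactly this is to issue throwaway draws during loading, forcing those deferred, state-specialized compiles up front. A rough sketch (the warm_state struct and helper are illustrative, not any real API):

```c
/* Illustrative: "pre-warm" programs by drawing one offscreen point per
 * program/state combination, so drivers that bake context state into the
 * shader finish compiling before the first real frame. */
struct warm_state {
    GLuint program;
    GLenum blend_src, blend_dst; /* one example of baked-in context state */
};

static void prewarm_programs(const struct warm_state *s, int count,
                             GLuint empty_vao)
{
    glEnable(GL_BLEND);
    glBindVertexArray(empty_vao); /* attribute-less VAO; the vertex shader
                                     can synthesize its own position */
    for (int i = 0; i < count; ++i) {
        glUseProgram(s[i].program);
        glBlendFunc(s[i].blend_src, s[i].blend_dst);
        glDrawArrays(GL_POINTS, 0, 1); /* dummy draw triggers the full,
                                          state-specialized compile */
    }
}
```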

That said, the programmer should never have to pre-optimize GLSL. That’s just sort of embarrassing for the runtime/driver.
