Comments on: The Unix Philosophy and your pipeline

Oh, so much to say about this post. I almost feel like writing a second blog post to comment on it :)

So the Unix approach is nice in lots of ways: flexibility, small apps doing one thing, and so on.

One aspect of this approach that isn't touched on here is intermediate file support. If you have 5 steps in your pipeline, that is, 5 possible transforms of the data on its way to game-engine-ready, then you've got at least 3 temporary file formats between the original data and the game-engine-ready data, as each part of the pipe writes out its conclusions, ready to be inhaled by another part of the pipe. Now, these files can be temporary or not. (If not, it does actually enable you to re-run only parts of the pipe later, if one part changes. For example, if step 5 of an 8-step pipeline changes, whether the code, the format, or whatever, you only have to re-run steps 5 through 8 rather than all of them.) Either way, though, it's a) space consuming and b) time consuming to have to write and then re-read all those files. This is something the monolithic pipeline avoids, since all data tends to stay in memory.

Something else a pipeline like this requires is strict adherence to both language choice and versioning requirements for those languages, as well as third-party library requirements. I saw a pipeline like this at a movie development house that required 3 different versions of Python, because different parts of the pipeline used external libraries that needed features from each version. It's VERY easy for small parts to start becoming non-congruent in their external dependencies, and close attention needs to be paid to that.
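To make the partial re-run idea concrete, here is a minimal sketch of a timestamp-driven pipeline runner in Python. Everything in it (needs_rebuild, run_pipeline, asset.src, the toy transforms) is a hypothetical illustration, not anything from the original post:

```python
import os

def needs_rebuild(src, dst):
    # A step must re-run if its intermediate output is missing
    # or older than its input file.
    return not os.path.exists(dst) or os.path.getmtime(dst) < os.path.getmtime(src)

def run_pipeline(steps, source):
    current = source
    for name, transform in steps:
        output = current + "." + name        # this step's intermediate file
        if needs_rebuild(current, output):
            with open(current, "rb") as f:
                data = transform(f.read())   # the actual conversion work
            with open(output, "wb") as f:
                f.write(data)
        current = output                     # the next step inhales this file
    return current

if __name__ == "__main__":
    with open("asset.src", "wb") as f:
        f.write(b"  RAW ASSET DATA  ")
    # Two trivial stand-in transforms; a real pipeline would convert
    # meshes, textures, and so on.
    steps = [("normalized", bytes.lower), ("stripped", bytes.strip)]
    print(run_pipeline(steps, "asset.src"))
```

Note that timestamps only catch changes to the data itself; if a step's code or format changes, you still have to delete its stale intermediate (or version-stamp the outputs) to force that step and everything downstream to re-run.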

By: David Hontecillas, Sat, 05 Feb 2011 00:04:06 +0000 (/2011/02/04/the-unix-philosophy-and-your-pipeline/#comment-417)

One solution to having all the features "at hand" is to build all your command-line tools as DLL libraries which you can link from/to. You can then write a command-line tool that simply binds itself to some common interface (i.e. a function that takes command-line arguments) and, as a plus, can dynamically load different DLLs and batch-process assets on the fly. This way you get the best of both worlds, and your uber-editor can load in all the functions it needs as well.
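One way to read this suggestion, sketched below in Python for brevity rather than the C/C++ the real tools would likely use: if every tool library exports one agreed-upon entry point, a thin dispatcher can load any of them at runtime. The entry-point name tool_main, its int tool_main(int argc, char **argv) signature, and the library path are all assumptions for illustration, not an API from the comment:

```python
import ctypes
import sys

def run_tool(library_path, args):
    # Load the tool DLL/shared object and bind the assumed common
    # entry point: int tool_main(int argc, char **argv).
    tool = ctypes.CDLL(library_path)
    tool.tool_main.restype = ctypes.c_int
    # Build a C-style argv array from the Python argument list.
    argv = (ctypes.c_char_p * len(args))(*[a.encode() for a in args])
    return tool.tool_main(len(args), argv)

if __name__ == "__main__":
    # Hypothetical usage: dispatch.py ./mesh_export.so input.obj output.mesh
    # The library path doubles as argv[0], mimicking a program name.
    sys.exit(run_tool(sys.argv[1], sys.argv[1:]))
```

The same run_tool call works from a batch script or from inside an uber-editor, which is exactly the "best of both worlds" point above.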

By: Fabrice Lété, Fri, 04 Feb 2011 16:55:59 +0000 (/2011/02/04/the-unix-philosophy-and-your-pipeline/#comment-415)

If the editor just triggers your pipeline when you hit export, it's not really the MegaEditor I was talking about. I'm not arguing against having a level editor; that is a program that does one thing well. It starts becoming a MegaEditor when all the export code sits inside the editor application.

By: Sam Izzo, Fri, 04 Feb 2011 15:25:03 +0000 (/2011/02/04/the-unix-philosophy-and-your-pipeline/#comment-413)