Comments on: Dear Microsoft (and other culprits)…

It would be nice to see what should take the place of these old features after they're removed. As much as I'd like all that to happen, I still don't see how realistically bringing CPU and GPU so close together could achieve something without serious performance problems. The lockable resources are already helping a lot.

What I’d like to see instead is simply more programmability. Blending, clipping, bending, chipping, shading, tiling – I’ll be smiling. :D

By: Darren (/2011/06/23/dear-microsoft-and-other-culprits/#comment-6241), Thu, 23 Jun 2011 13:21:03 +0000

Hi Darren – I am glad to hear you are interested in learning more about graphics programming and procedural generation. Here is my advice:

1) It is not easy. There is no easy way about it, unfortunately. This does not mean it is impossible – far from it. In fact, as long as you have a bit of patience it just takes some time.

2) If you just want to make games, you can probably get by using middleware like Unity or the Unreal Development Kit initially.

3) If you are really interested in learning graphics programming, then you can take one of two paths: write your own software renderer or use an API like DirectX or OpenGL. Depending on what you want to do, there are advantages to both. If you write a software renderer, it is probably going to be slow (especially as a beginner) – but it will also give you a much better understanding of why and how things work. (I started learning by writing a software renderer, then scrapped it and rebuilt it as best I could on hardware.)

4) Writing a software renderer is pretty easy — at least getting started. All you need to be able to do is draw pixels to the screen. The fastest way to do this on Windows is to draw into a chunk of memory and then blit it to the screen with GDI, or to lock a surface in DirectX and draw into that (although depending on your version of DirectX, there may be complications aligning the texture to the pixels on the screen). A minimal sketch of the memory-buffer-plus-GDI approach appears at the end of this list.

5) It is the most mentally rewarding thing you will ever do. :) However, it might also be the most frustrating thing you ever do.

6) You will make a lot of mistakes, and probably scrap your engine several times. This is natural.

7) I started by reading books, but I think it is the slowest way to learn. Just dive in and start coding.

8) Google is your best friend. If you want to know how to do something – just Google it. If you get a strange error, Google it.

9) Don’t worry too much about performance at first, since you will probably scrap your first couple of engines. Just get it working.

10) As far as procedural generation goes, the most rewarding thing you can do is invent your own algorithms. However, there are definitely existing algorithms you will find useful: Perlin/Simplex noise, Marching Cubes, L-Systems, Voronoi diagrams, Delaunay triangulations, convex hulls, etc… (a small noise sketch appears at the end of this list).

11) Definitely try to wrap your head around matrix multiplication at some point. You will use it for many things. Equally useful: interpolation (linear, cubic, etc) and Bézier curves (a short matrix and Bézier sketch appears at the end of this list).

12) In short, just get in there and start coding stuff – the answers will reveal themselves. Deriving the answers on your own will make you a much better programmer than reading them off some site or book :)
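To make point 4 above concrete, here is a minimal sketch of the memory-buffer-plus-GDI approach: the program keeps an ordinary array of 32-bit pixels, fills it on the CPU, and blits it to the window with StretchDIBits. The window size, class name and the toy gradient fill are placeholder choices for illustration only.

```cpp
// Minimal Win32 framebuffer sketch: draw pixels into system memory, blit with GDI.
// Build (MinGW example): g++ fb.cpp -o fb.exe -lgdi32 -mwindows
#include <windows.h>
#include <cstdint>
#include <vector>

static const int W = 640, H = 480;
static std::vector<uint32_t> g_pixels(W * H);   // 0x00RRGGBB, one uint32 per pixel

// Fill the buffer on the CPU - this is where a software renderer would rasterise.
static void RenderFrame()
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            g_pixels[y * W + x] = (uint32_t)((x * 255 / W) << 16 | (y * 255 / H) << 8);
}

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = W;
        bmi.bmiHeader.biHeight      = -H;          // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        // Blit the CPU-side buffer straight to the window.
        StretchDIBits(dc, 0, 0, W, H, 0, 0, W, H,
                      g_pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int show)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = "SoftFB";
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("SoftFB", "Software framebuffer", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, W, H, NULL, NULL, inst, NULL);
    ShowWindow(hwnd, show);

    RenderFrame();

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```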
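For point 10, here is a small, self-contained 2D value-noise sketch (a simpler cousin of Perlin noise): hash the integer lattice points, smooth the interpolation weights, and sum a few octaves. The hash constants and octave settings are arbitrary illustration values, not anything canonical.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Cheap integer hash -> pseudo-random value in [0,1) for each lattice point.
static float Hash(int x, int y)
{
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h ^ (h >> 16)) / 4294967296.0f;
}

static float Lerp(float a, float b, float t) { return a + (b - a) * t; }
static float Smooth(float t)                 { return t * t * (3.0f - 2.0f * t); } // smoothstep fade

// One octave of 2D value noise: bilinear interpolation of hashed corner values.
static float ValueNoise(float x, float y)
{
    int   xi = (int)std::floor(x), yi = (int)std::floor(y);
    float tx = Smooth(x - xi),     ty = Smooth(y - yi);
    float a = Hash(xi,     yi),    b = Hash(xi + 1, yi);
    float c = Hash(xi, yi + 1),    d = Hash(xi + 1, yi + 1);
    return Lerp(Lerp(a, b, tx), Lerp(c, d, tx), ty);
}

// Fractal sum: add octaves of increasing frequency and decreasing amplitude.
static float Fbm(float x, float y, int octaves = 4)
{
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * ValueNoise(x * freq, y * freq);
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return sum;   // roughly in [0, 1)
}

int main()
{
    // Print a tiny heightfield as a sanity check.
    for (int y = 0; y < 8; ++y) {
        for (int x = 0; x < 8; ++x)
            std::printf("%5.2f ", Fbm(x * 0.25f, y * 0.25f));
        std::printf("\n");
    }
}
```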
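And for point 11, a tiny sketch of the two tools mentioned there: a 4x4 row-major matrix multiply (the operation behind chaining transforms) and a cubic Bézier evaluated by repeated linear interpolation (de Casteljau). The row-major layout is just one convention; pick one and stay consistent.

```cpp
#include <cstdio>

struct Vec2 { float x, y; };
struct Mat4 { float m[4][4]; };            // row-major: m[row][col]

// C = A * B: each output element is a row of A dotted with a column of B.
static Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 c = {};
    for (int r = 0; r < 4; ++r)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                c.m[r][col] += a.m[r][k] * b.m[k][col];
    return c;
}

static float Lerp(float a, float b, float t) { return a + (b - a) * t; }
static Vec2  Lerp(Vec2 a, Vec2 b, float t)   { return { Lerp(a.x, b.x, t), Lerp(a.y, b.y, t) }; }

// Cubic Bezier via de Casteljau: repeatedly lerp the control points.
static Vec2 Bezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t)
{
    Vec2 a = Lerp(p0, p1, t), b = Lerp(p1, p2, t), c = Lerp(p2, p3, t);
    Vec2 d = Lerp(a, b, t),   e = Lerp(b, c, t);
    return Lerp(d, e, t);
}

int main()
{
    Mat4 identity = {{{1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}}};
    Mat4 twice = Mul(identity, identity);          // still the identity
    std::printf("m[0][0] = %.1f\n", twice.m[0][0]);

    Vec2 p = Bezier({0,0}, {0,1}, {1,1}, {1,0}, 0.5f);
    std::printf("bezier(0.5) = (%.2f, %.2f)\n", p.x, p.y);
}
```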

By: Keith Judge (/2011/06/23/dear-microsoft-and-other-culprits/#comment-6237), Thu, 23 Jun 2011 10:29:07 +0000

Thanks Keith – I agree with you as well :)

As you say, I do not think there are any real advantages to using a WARP device (besides compatibility). What I am advocating is doing away with APIs and discrete GPUs. This has a number of advantages, in my view, in terms of making things simpler:

1) You don’t have to manage variables between the GPU (in shader code) and CPU.
2) You don't have to specially allocate and deallocate resources on the GPU.
3) What you see is what you get – programming is just as simple as writing code for the CPU, albeit with the potential complications of handling threading properly (a small sketch follows this list).
4) You don’t have to make any special API calls to set up the GPU state.
5) You are not constrained by GPU memory limits (well, at least more system memory is likely available).
6) You do not have to lock and unlock resources, and there is no CPU/GPU bottleneck to worry about.
7) You can potentially handle more complex and dynamic languages on the CPU-side.
8) You would likely not need various levels of hardware since everything is technically programmable.
9) There are others you could probably think up; my brain is fried right now :)…
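To illustrate points 1 to 4 (and the threading caveat in point 3), here is a minimal sketch of "just code" CPU rendering: a per-pixel shading loop writes straight into an ordinary array, and std::thread splits the rows across cores. There is no resource allocation, no state setup and no lock/unlock; the image size, row-interleaved split and toy shading function are placeholder choices.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

static const int W = 256, H = 256;

// The "shader" is just a function: no HLSL/GLSL, no constant buffers, no API state.
static uint32_t Shade(int x, int y)
{
    uint8_t r = (uint8_t)(x ^ y);          // toy pattern
    uint8_t g = (uint8_t)((x * y) >> 8);
    return (uint32_t)r << 16 | (uint32_t)g << 8;
}

int main()
{
    std::vector<uint32_t> image(W * H);    // ordinary system memory, no Lock/Unlock

    // Split rows across hardware threads; each thread shades its own interleaved slice.
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        pool.emplace_back([&, t] {
            for (int y = (int)t; y < H; y += (int)n)
                for (int x = 0; x < W; ++x)
                    image[y * W + x] = Shade(x, y);
        });
    }
    for (auto& th : pool) th.join();

    std::printf("shaded %d pixels on %u threads\n", W * H, n);
}
```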

Obviously, flexibility comes at the cost of performance, but I don't think that's always a bad thing. The constraints of mobile devices have actually caused people to develop more innovative games, and do so more rapidly, IMHO.

By: Darren (/2011/06/23/dear-microsoft-and-other-culprits/#comment-6235), Thu, 23 Jun 2011 10:07:25 +0000

I part agree and part disagree with your post.

You can create CPU based rasterisers in DX10/11 using the WARP device type. I believe there are several examples that work quite well. However, I don't see why a CPU based renderer is preferable to using a GPU if there's one available. You say that it would be a simpler API, but I don't entirely see how that would be the case.
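For reference, requesting the WARP software rasteriser in D3D11 is a one-argument change from the usual hardware device creation; the rest of the API is used exactly the same way. A minimal sketch, error handling omitted:

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

int main()
{
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level   = {};

    // D3D_DRIVER_TYPE_WARP selects the CPU rasteriser instead of the GPU driver.
    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_WARP,       // software (WARP) device
        nullptr, 0,                 // no software module, no flags
        nullptr, 0,                 // default feature levels
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {
        // Use the device exactly as you would a hardware one...
        context->Release();
        device->Release();
    }
    return 0;
}
```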

As for texture formats, I agree they need to change, but not with your solution. Instead I’d prefer a flexible texture format, where instead of a fixed list of formats, you can specify the number of channels (as you say), but choose 8, 16 or 32 bits on a per channel basis and choose normalised, signed, sRGB, etc (also per channel). This would then be similar to the flexible vertex format system and could somewhat unify those APIs. It would make dealing with g-buffers a lot easier.
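To picture what such a per-channel format description could look like, here is a purely hypothetical descriptor; the names and fields are invented for illustration and are not part of DirectX or any existing API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-channel texture format description (not a real D3D structure).
enum class ChannelType : uint8_t { UNorm, SNorm, UInt, SInt, Float, SRGB };

struct ChannelDesc {
    uint8_t     bits;   // 8, 16 or 32
    ChannelType type;   // interpretation, chosen per channel
};

struct TextureFormatDesc {
    std::vector<ChannelDesc> channels;   // 1..4 channels, each described independently
};

// Example: three 8-bit sRGB colour channels plus a 32-bit float channel,
// the kind of mixed layout that would suit a g-buffer target.
static TextureFormatDesc MakeGBufferFormat()
{
    return { { { 8,  ChannelType::SRGB  },
               { 8,  ChannelType::SRGB  },
               { 8,  ChannelType::SRGB  },
               { 32, ChannelType::Float } } };
}
```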

There are other niggly bits of DX11 that need to change, though rather than just fixing those, I hope that DX12 is revolutionary (and comes along with a new console). It’s just a bit annoying that DX development all seems to happen behind closed doors, though I imagine they’re looking closely at a few of the recent interesting OpenGL extensions.
