Comments on: Improved Normal-map Distributions

> How is this superior to simply extending the normal to be encoded to a plane at z=1, storing the resultant x,y intersection loc, then re-creating that (x, y, 1) vec and (re)normalizing on decode?

This suggestion is actually equivalent to the partial derivative decoding scheme and has the same issues (as you allude to, e.g. loss of ability to store normals pointing to the horizon).

> How about reflecting normals that are below the +/-x = z / +/-y = z planes about same and storing that fact with a single bit in each element of the texture so that all of the coding is done in the pyramid “near” 0,0,1 and the range of the method above is sanitized?

Yeah, maybe… one thing I’d be a little wary of is the precision distribution of a scheme like this. It seems like we’d be allocating more bits of precision to normals close in direction to those x=z, y=z planes than would be ideal. Still, maybe that’s what would be desired in some cases.
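For what it’s worth, here is a minimal sketch of what that folding might look like (my own Python, purely hypothetical, with made-up function names): reflecting about the x = ±z plane amounts to swapping x and z (with a sign flip on the negative side), same for y, and one flag bit per axis lets the decoder undo it:

```python
import math

def fold_normal(n):
    # Reflect about x = +/-z, then y = +/-z, until the normal lies in the
    # pyramid |nx| <= nz, |ny| <= nz. Reflection about x = z swaps x and z;
    # about x = -z it swaps them with a sign flip. One flag bit per axis
    # records whether a reflection happened.
    x, y, z = n
    fx = abs(x) > z
    if fx:
        x, z = math.copysign(z, x), abs(x)
    fy = abs(y) > z
    if fy:
        y, z = math.copysign(z, y), abs(y)
    return (x, y, z), (fx, fy)

def unfold_normal(folded, flags):
    # Reflections are involutions, so apply the same swaps in reverse order.
    x, y, z = folded
    fx, fy = flags
    if fy:
        y, z = math.copysign(z, y), abs(y)
    if fx:
        x, z = math.copysign(z, x), abs(x)
    return (x, y, z)
```

The folded normal always lands in the pyramid around (0, 0, 1), so the two stored components stay well away from the horizon, at the cost of two flag bits and the precision seam along the fold planes.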

By: graham madarasz (/2011/02/24/improved-normal-map-distributions/#comment-2268), Sat, 02 Apr 2011 18:48:28 +0000

Sorry, what I said is not correct: lines in parameter space do not map to great circles on the sphere under a stereographic projection, but under a gnomonic projection. The stereographic projection sits somewhere in between the orthographic and the gnomonic: the orthographic places the projection point at infinity and the gnomonic at the origin, while the stereographic places it at the pole of the sphere. Interpolation behaves better under the stereographic projection, but it’s not ideal either.

Anyway, when I get some time, I’ll plug these formulas into my compressor and let you know how well they work.
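For reference, the two projections can be sketched like this for a +z hemisphere of unit normals (my own Python, just illustrating the mappings, not anyone’s shipping code); note the gnomonic pair is the same mapping as the partial-derivative scheme discussed elsewhere in these comments, up to sign:

```python
import math

def stereographic_encode(n):
    # Project from the pole (0, 0, -1) onto the z = 0 plane.
    x, y, z = n
    return (x / (1.0 + z), y / (1.0 + z))

def stereographic_decode(u, v):
    # Inverse mapping back onto the unit sphere.
    d = 1.0 + u * u + v * v
    return (2.0 * u / d, 2.0 * v / d, (1.0 - u * u - v * v) / d)

def gnomonic_encode(n):
    # Project from the sphere's center onto the z = 1 plane.
    x, y, z = n
    return (x / z, y / z)

def gnomonic_decode(u, v):
    # Re-normalize (u, v, 1).
    inv = 1.0 / math.sqrt(u * u + v * v + 1.0)
    return (u * inv, v * inv, inv)
```

Under the gnomonic mapping, straight lines in (u, v) correspond to great circles on the sphere, which is why interpolated texels stay on geodesics there and not under the stereographic mapping.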

By: Mark Lee (/2011/02/24/improved-normal-map-distributions/#comment-1069), Sat, 26 Feb 2011 21:42:43 +0000

Sorry for the much delayed response, Priyamvad. To follow up on your questions: the post was assuming that all normals are in tangent space, so that we can get away with encoding only a hemisphere of normals instead of the full sphere (as you would have to if working in object space, for example).

Encoding using partial derivatives can be viewed as taking the initial heightfield from which the eventual normal map was derived, and storing the slope of the surface at each pixel in your normal map with respect to x and y.

Conversion between PDs and normals is simple and fast. If we have a normal (nx, ny, nz) and want to convert it to PDs, we store (-nx/nz, -ny/nz) in our map. The original normal can be reconstructed as normalize(-dx, -dy, 1), where dx and dy are the stored components.
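In code, that round trip might look like this (a minimal Python sketch of the stated formulas; the function names are mine):

```python
import math

def normal_to_pd(n):
    # Encode a tangent-space normal as partial derivatives (slopes): (-nx/nz, -ny/nz).
    nx, ny, nz = n
    return (-nx / nz, -ny / nz)

def pd_to_normal(dx, dy):
    # Reconstruct the unit normal: normalize(-dx, -dy, 1).
    inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx * inv, -dy * inv, inv)
```

Decoding is cheap: one reciprocal square root and two multiplies, which is part of why the scheme is attractive at runtime.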

Why would we want to do that? The big advantage is that if we have two normal maps we want to add together, we can simply add the derivatives directly before reconstructing the normal. This is equivalent to adding the heightfields together and then deriving a normal, which is essentially what we’re trying to do conceptually.
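As a sketch of that compositing (again my own Python, hypothetical names), summing the slopes before reconstructing:

```python
import math

def composite_normals(n_base, n_detail):
    # Sum the slopes (-nx/nz, -ny/nz) of both normals, which corresponds
    # to adding the underlying heightfields, then rebuild the unit normal
    # as normalize(-dx, -dy, 1).
    dx = -n_base[0] / n_base[2] - n_detail[0] / n_detail[2]
    dy = -n_base[1] / n_base[2] - n_detail[1] / n_detail[2]
    inv = 1.0 / math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx * inv, -dy * inv, inv)
```

A flat detail normal (0, 0, 1) contributes zero slope, so it leaves the base normal unchanged, exactly what you want from an "add the detail heightfield" operation.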

What’s the disadvantage? Think about the slope of a line in the 2D Cartesian plane: flat = 0, 45 degrees = 1, and a vertical slope goes to infinity. These are exactly the values we’d be storing in our normal maps under the PD scheme, so it should be apparent that the scheme isn’t very good at storing normals which point toward the horizon. In our implementation, anything beyond 45 degrees from straight up was lost, hence the comment about detrimental effects on normal maps.
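To see the loss numerically, here is a toy sketch (my own, assuming slopes are stored in a signed [-1, 1] eight-bit channel) where a normal 60 degrees from straight up collapses back to 45 degrees:

```python
import math

def encode_pd_clamped(n, bits=8):
    # Store slopes (-nx/nz, -ny/nz) in a signed [-1, 1] texture channel:
    # slope magnitudes above 1 (angles past 45 degrees from +z) clamp away.
    scale = (1 << bits) - 1
    out = []
    for s in (-n[0] / n[2], -n[1] / n[2]):
        s = max(-1.0, min(1.0, s))          # clamp to the representable range
        q = round((s * 0.5 + 0.5) * scale)  # quantize to 'bits' bits
        out.append((q / scale) * 2.0 - 1.0) # dequantize
    return tuple(out)

a = math.radians(60.0)
n = (math.sin(a), 0.0, math.cos(a))  # 60 degrees from straight up
dx, dy = encode_pd_clamped(n)
# The decoded angle comes back as ~45 degrees, not 60.
decoded = math.degrees(math.atan2(math.hypot(dx, dy), 1.0))
```

The slope for this normal is tan(60°) ≈ 1.73, well outside the channel’s range, so the clamp is what destroys the information rather than the quantization itself.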

I mentioned we didn’t store our primary normal maps using PDs; we do, however, store all detail normal maps in PD format, to aid in correctly compositing them with the primary normal maps. For detail normal maps, since they’re just adding detail, we’re not too concerned about not being able to encode normals near the horizon.

Hope this helps. Good luck!

By: castano (/2011/02/24/improved-normal-map-distributions/#comment-995), Fri, 25 Feb 2011 06:40:56 +0000

It would be good to see how these two projection types do with our test images.

By: Mark Lee (/2011/02/24/improved-normal-map-distributions/#comment-989), Thu, 24 Feb 2011 19:02:06 +0000

By: Mark Lee (/2011/02/24/improved-normal-map-distributions/#comment-988), Thu, 24 Feb 2011 18:52:58 +0000

Hi, great article! Thanks for sharing your experience with us!
Can you elaborate a bit more on the ‘partial derivative scheme’ you guys used before? Do you mean tangent space normal maps?
Also what were some of the ‘detrimental effects on normal maps’ that it would create?
It’d help us tremendously to understand these nuances or at least know what to expect.

Thanks!
Priyamvad

By: Patrick (/2011/02/24/improved-normal-map-distributions/#comment-985), Thu, 24 Feb 2011 17:06:32 +0000

Very cool indeed. I’d never considered looking at storing normals as a maths problem rather than a geometry problem, but doing that seems to open up a whole range of interesting avenues. Many thanks for sharing this!

By: Tweets that mention Improved Normal-map Distributions » #AltDevBlogADay -- Topsy.com (/2011/02/24/improved-normal-map-distributions/#comment-960), Thu, 24 Feb 2011 08:29:13 +0000

[...]
