How to calculate the blue channel for a normal map?

In our new pipeline we need to pack two of our textures into one. For our normal map we only have room for two channels, so we need to calculate the blue channel ourselves. I wonder - do any of you guys have the magic formula?
This kind of math is way out of my league, but our programmers seem to be stuck. I'm using Shader FX to visually check the results.
This is how far we've come:

z = sqrt( 1 - ((2x-1)^2 + (2y-1)^2) )

This formula has given us the closest result, but the new blue channel has higher contrast than the original channel in the texture (the white is white, but the gray is a bit darker). We tried gamma correcting the image (thinking maybe Photoshop was messing things up), but that made it worse.

The problem shows up when we check the new, calculated blue channel (from the formula above) against this:

x^2 + y^2 + z^2 = 1

The summed result does not come out fully white (some pixels of the image do not add up to white). We think some of our normal maps might not be correctly normalized - our artists use all kinds of programs to generate them. How can we solve this?

It sounds like you may have gotten the use of the normals and the storage of the normals mixed together. Although the shader will remap the texels in your map from 0-1 RGB values to -1 to 1 vector components, that's a matter of interpretation; when actually reading the texture, your values are just 0-1. So you probably don't want to use 2x-1 / 2y-1 to calculate the z; you want to calculate your normalized pixel from the 0-1 values in the raw texture.
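
In HLSL-ish terms, the storage/use distinction looks roughly like this (a minimal sketch; the function name is made up):

// the shader's interpretation step: 0-1 texels become -1 to 1 vector components
float2 storedToVector(float2 stored)
{
    return stored * 2.0 - 1.0;
}

The point is that this remap is something the shader does for lighting; the raw texels themselves are still plain 0-1 numbers.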

If you use pre-normalized maps:

  1. The normal maps need to be stored in a linear format without gamma correction.
  2. The original normals were normalized before the blue component was removed.

If both of those are true, B is just sqrt(1 - (R^2 + G^2)). So, for example, a pixel with a straight-up normal would be (.5, .5, .707) in the original, then stored as (.5, .5) and reconstructed as sqrt(1 - (.25 + .25)) = sqrt(.5), which is .707.
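
As an HLSL-style sketch of that formula (the function name is illustrative):

// per the two conditions above; r and g are the raw 0-1 texel values
float reconstructB(float r, float g)
{
    return sqrt(1.0 - (r * r + g * g));
}
// the worked example: reconstructB(0.5, 0.5) == sqrt(0.5), about 0.707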

If your normal maps are NOT prenormalized, you won't be able to reconstruct the blue channel - you'd have to store it explicitly somehow.

Thanks for the reply, Theodox! Noob question (?): how do you change the gamma settings from Photoshop? We use Nvidia's dds plugin. I looked around, and maybe I'm blind, but I can't find a simple way. Also, is there a simple way to make sure the normal maps are normalized?

EDIT:
When thinking about it, wouldn't it be better to gamma correct in the shader? Even if you were able to save in linear space from Photoshop, the artists still export normal maps from ZBrush, xNormal or whatever, and those would still be gamma corrected. What is your pipeline for exporting/importing normal maps when it comes to gamma correction and normalizing?

I'm not sure if it is related, but I changed Photoshop's color profile to Monitor Color. I find that using the default profile messes with the display of the individual channels.
For example, create an RGB texture and fill it with 50% gray, then open the Channels panel. With Monitor Color enabled, the RGB, R, G, and B channels will all look the same. With the default color profile enabled, the individual channels will appear brighter than the RGB image. This really caused me headaches when I was authoring Du/Dv maps for an indirect shader.

Unfortunately it's not always clear who is in charge of gamma settings in the texture pipeline. Sometimes people use sRGB textures and de-gamma them in the shader; other people write out a de-gammaed texture in some exporter step.
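
A shader-side de-gamma can be as simple as the usual 2.2 approximation - a sketch only; the exact curve (or hardware sRGB sampling) depends on your engine:

// cheap approximation of the sRGB decode; the exact piecewise sRGB
// curve or hardware sRGB sampling is preferable when available
float3 srgbToLinear(float3 c)
{
    return pow(c, 2.2);
}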

Here’s a good reference on the overall problem:

The key thing is to get everybody - artists, tools programmers, and graphics people - to agree on where the borders of linear space will be, and then enforce that; the errors that come from accidentally mixing gamma and linear values are often hard to debug. FWIW, normal maps are an edge case, since the shader is going to interpret the numbers as vectors rather than RGB colors. Most normal generation tools ignore gamma and expect the shader to do the same.

Wow, thanks for the link - it was an awesome read. Thanks, both of you! This is definitely a much bigger problem than I expected. I will try to work out a pipeline for our project with the programmers. If you have any other tips, please let me know.

Hi!
After some reevaluation, this is the formula we came up with:

z = ( sqrt( 1 - ((2x-1)^2 + (2y-1)^2) ) ) / 2 + 0.5

The extra steps are needed when you load the texture into a node-based shader network. Ehh, honestly I don't really understand why, but this works if the normal maps are normalized.
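
If I had to guess at the why: a node network that treats the result as a normal map will remap every channel from 0-1 back to -1 to 1 on its own, so the reconstructed z (already a vector component) has to be packed back into 0-1 first. A sketch of the full round trip, with made-up names:

// 'packedNormalTex' and 'uv' are hypothetical; this just illustrates the repack
float2 rg = tex2D(packedNormalTex, uv).rg;     // stored 0-1 values
float2 xy = rg * 2.0 - 1.0;                    // unpack to -1 to 1 components
float  z  = sqrt(saturate(1.0 - dot(xy, xy))); // reconstruct the z component
float  b  = z * 0.5 + 0.5;                     // repack to 0-1, so the network's
                                               // own 2*b - 1 remap recovers z
float3 storedNormal = float3(rg, b);           // feed this to the normal map input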

Here is some more literature for those of you with deeper knowledge:

Hi, this is an old topic, but still very relevant for many people, including me.
For me the easiest variant is:
z = sqrt( 1 - saturate( x*x + y*y ) )
It costs one extra instruction (saturate), but it excludes negative values inside the sqrt.
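
Wrapped up as a small HLSL helper (assuming x and y have already been remapped to -1 to 1; the function name is made up):

float3 reconstructNormal(float2 xy)
{
    // saturate keeps the sqrt argument from going negative when the
    // source map wasn't perfectly normalized
    float z = sqrt(1.0 - saturate(dot(xy, xy)));
    return float3(xy, z);
}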

Hi, what does the saturate function even do? I have no context for this. I'm working in Blender. Thanks.

saturate is a clamp to the 0-1 range, but cheaper
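
In HLSL the two are equivalent:

// saturate(x) is the same as clamp(x, 0.0, 1.0), and usually compiles
// to a free instruction modifier on GPUs
float a = saturate(x);
float b = clamp(x, 0.0, 1.0);

Blender's shader nodes don't expose saturate by name, but if I remember right, the Math node has a Clamp option that does the same thing.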