
I have written a deferred renderer that stores normal and depth values in a floating-point texture. From that I can reconstruct a given fragment's position in view space, but what I really want is the fragment's position in world space.
I thought that to go from view space to world space I would have to multiply the position by the camera's inverse world matrix, but that doesn't seem to be right...

`depthMap` is the depth texture; its w component holds `clipPos.z / clipPos.w`, where `clipPos = gl_Position` is passed down from the vertex shader.
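
For context, the depth value could be written in the G-buffer pass roughly like this. This is only a sketch, not the exact shaders: the varying names `vClipPos` / `vNormal` are made up, and the matrix and attribute names assume a three.js-style `ShaderMaterial` setup.

    // Sketch of a G-buffer pass that stores a view-space normal in rgb and
    // NDC depth (clipPos.z / clipPos.w) in a.

    // --- vertex shader ---
    varying vec4 vClipPos;
    varying vec3 vNormal;
    void main() {
        vNormal = normalize( normalMatrix * normal );
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
        vClipPos = gl_Position;   // clip-space position, interpolated per fragment
    }

    // --- fragment shader ---
    varying vec4 vClipPos;
    varying vec3 vNormal;
    void main() {
        float ndcDepth = vClipPos.z / vClipPos.w;   // depth in NDC, range [-1, 1]
        gl_FragColor = vec4( normalize( vNormal ), ndcDepth );
    }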

Then, in my screen-quad shader, I do this:

    vec2 texCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
    // Rebuild the NDC-space position from the screen coordinate and stored depth
    vec2 xy = texCoord * 2.0 - 1.0;
    vec4 vertexPositionProjected = vec4( xy, depthMap.w, 1.0 );
    // Unproject back to view space
    vec4 vertexPositionVS = projectionInverseMatrix * vertexPositionProjected;
    vertexPositionVS.xyz /= vertexPositionVS.w;
    vertexPositionVS.w = 1.0;
    // This next line I don't think is correct?
    vec3 worldPosition = ( camWorldInv * vec4( vertexPositionVS.xyz, 1.0 ) ).xyz;
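
For comparison, the transform from view space back to world space is normally the inverse of the view matrix, which is the camera's world (model) matrix rather than its inverse. A minimal sketch, assuming a `cameraWorldMatrix` uniform set to the inverse of the view matrix used to fill the G-buffer (in three.js this would be `camera.matrixWorld`):

    // View space -> world space: multiply by inverse(viewMatrix),
    // i.e. the camera's world matrix, not the camera's inverse world matrix.
    uniform mat4 cameraWorldMatrix;   // assumed uniform, = inverse of the view matrix

    vec3 viewToWorld( vec3 positionVS ) {
        return ( cameraWorldMatrix * vec4( positionVS, 1.0 ) ).xyz;
    }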

The end goal is a fog effect whose density depends both on the distance from the camera and on the fragment's height in world space.
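
For reference, a rough sketch of the kind of fog term this would feed into, combining view distance and world-space height; the constants and uniform names here are placeholders, not values from the question:

    // Placeholder fog: denser with distance from the camera,
    // thinner with world-space height.
    uniform vec3 cameraPositionWS;   // assumed uniform: camera position in world space
    uniform vec3 fogColor;

    vec3 applyFog( vec3 color, vec3 worldPosition ) {
        float dist         = length( worldPosition - cameraPositionWS );
        float distFactor   = 1.0 - exp( -dist * 0.02 );
        float heightFactor = exp( -max( worldPosition.y, 0.0 ) * 0.1 );
        float fogAmount    = clamp( distFactor * heightFactor, 0.0, 1.0 );
        return mix( color, fogColor, fogAmount );
    }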

Mat
  • This might help you: http://antongerdelan.net/opengl/raycasting.html – bwroga Oct 03 '14 at 12:47
  • You might have the same problem I had: http://stackoverflow.com/questions/16246250/reconstructed-world-position-from-depth-is-wrong – Dr Bearhands Oct 09 '14 at 16:30
  • The way you have it should be correct. The question is what you have stored in the textures; in particular, the value of `depthMap.w` needs to be linear, and that may not be what you stored there. There would need to be more code for this question to be answered. All in all, the general approach seems correct. Note that you can merge the projection inverse matrix and the modelview inverse matrix and do the division by `worldPosition.w` at the very end. – the swine Nov 22 '16 at 10:50
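
A small sketch of what the merging suggested in the last comment could look like; `clipToWorldMatrix` is a made-up uniform that would be computed on the CPU as the camera's world matrix multiplied by the inverse projection matrix:

    // Merge inverse(projection) and inverse(view) into one matrix and divide
    // by w only once, at the end.
    uniform mat4 clipToWorldMatrix;   // assumed = cameraWorldMatrix * projectionInverseMatrix

    vec3 ndcToWorld( vec3 ndc ) {
        vec4 worldPosition = clipToWorldMatrix * vec4( ndc, 1.0 );
        return worldPosition.xyz / worldPosition.w;
    }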

0 Answers