2

The article here says:

Dividing x, y, and z by w accomplishes this. The resulting coordinates are called normalized device coordinates. Now all the visible geometric data lies in a cube with positions between <-1, -1, -1> and <1, 1, 1> in OpenGL, and between <-1, -1, 0> and <1, 1, 1> in Direct3D.
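For reference, the divide the article describes would look something like this in GLSL (the names are mine, just for illustration — the GPU actually performs this step itself after clipping):

```glsl
uniform mat4 u_ModelViewProjection;
attribute vec4 a_Position;

void main()
{
    // Standard transform into homogeneous clip space.
    vec4 clipPos = u_ModelViewProjection * a_Position;

    // The perspective divide the article describes (done by the
    // hardware automatically; shown here only to illustrate the math).
    vec3 ndc = clipPos.xyz / clipPos.w;
    // ndc.z lies in [-1, 1] under OpenGL conventions,
    // but in [0, 1] under Direct3D conventions.

    gl_Position = clipPos;
}
```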

This raises a problem for cross-platform shaders which want to test the Z coordinate for some specific reason. Is there a way to get a Z coord in the same range, regardless of platform?

Mr. Boy
  • 60,845
  • 93
  • 320
  • 589

3 Answers

1

Using the nonlinear z/w value of NDC space is normally avoided. One normally does this by passing the absolute vertex Z distance by an additional varying. That way things stay portable.
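A minimal sketch of what that might look like (uniform and varying names are mine, for illustration only):

```glsl
// Vertex shader (sketch): compute the linear view-space depth and
// pass it to the fragment shader through an extra varying.
uniform mat4 u_ModelView;
uniform mat4 u_Projection;

varying float v_LinearDepth;

void main()
{
    vec4 viewPos = u_ModelView * gl_Vertex;
    v_LinearDepth = -viewPos.z;  // positive distance in front of the camera
    gl_Position = u_Projection * viewPos;
}
```

The fragment shader then reads `v_LinearDepth` directly and gets the same value on either API.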

datenwolf
  • 159,371
  • 13
  • 185
  • 298
  • This answer seems like it could be really helpful, but I can't quite understand it! Can you (or someone) clarify as the language is confusing me? – Mr. Boy Nov 14 '11 at 21:09
  • @John: What part of that "language" is confusing you? If you want a linear Z value, you just get a linear Z value from the vertex shader and pass it along manually. – Nicol Bolas Nov 14 '11 at 22:17
  • @NicolBolas - _"by passing the absolute vertex Z distance by an additional varying"_ doesn't make sense, it's not proper English. – Mr. Boy Nov 15 '11 at 08:59
  • 1
    @John: "varying" is not an adjective in that sentence, but a noun. – datenwolf Nov 15 '11 at 09:22
  • @datenwolf: "varying" _isn't_ a noun, you can't make it into one :). I think what you are saying is "One normally does this by passing the absolute vertex Z distance + an additional value"? Maybe you can add a concrete/algebra example to clarify. – Mr. Boy Nov 15 '11 at 15:32
  • 2
    @John: In this very context it is. In GLSL a "Varying" is a special kind of variable passed from one shader stage to the next shader stage. – datenwolf Nov 15 '11 at 15:40
1

Interesting question, but I doubt that it's achievable, since the viewport transformation is still fixed-function.

The first thing that comes to mind is to use glDepthRange (or its possible D3D counterpart) to change the mapping from NDC z to depth. But this won't work: passing [-1,1] to glDepthRange will just clamp it to [0,1], nor can you set it in D3D to [0.5,1], since before that everything will still be clipped against [0,1].

But I don't think you need it too often, since in the fragment/pixel shader you get window coordinates with a normalized [0,1] depth (I expect Cg to behave similarly to GLSL here). And in the vertex shader you would more often need the world- or view-space depth anyway, instead of the NDC z. If you really need it, you may just base the decision on a preprocessor definition in the shader.
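The preprocessor approach might look like this (the `OPENGL` define is hypothetical — you would set it yourself when compiling the shader for each API):

```glsl
// Remap an NDC z value into a common [0, 1] range. Sketch only;
// the OPENGL define is assumed to be supplied by the application.
float normalizedDepth(float ndcZ)
{
#ifdef OPENGL
    return ndcZ * 0.5 + 0.5;  // OpenGL: [-1, 1] -> [0, 1]
#else
    return ndcZ;              // Direct3D: already [0, 1]
#endif
}
```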

Christian Rau
  • 45,360
  • 10
  • 108
  • 185
0

Where are you doing this testing of Z that you want to do?

If you're doing it in the fragment shader, then you shouldn't care. `gl_FragCoord.z` (or whatever Cg's equivalent is) is in window space. The window-space Z extent is defined by `glDepthRange`; by default, it goes from 0 to 1.
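In GLSL terms, a fragment-shader test against that window-space depth might look like this (the 0.5 threshold is arbitrary, purely for illustration):

```glsl
// Fragment shader sketch: gl_FragCoord.z is already in the
// window space defined by glDepthRange, [0, 1] by default,
// so this test behaves the same regardless of the API's NDC range.
void main()
{
    if (gl_FragCoord.z > 0.5)
        discard;  // e.g. drop fragments in the far half of the depth range

    gl_FragColor = vec4(1.0);
}
```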

If you're doing this test in the vertex shader, then you'll just have to live with it. A better test might be one done in camera space, rather than clip-space or NDC space. At least then, you're usually dealing with world-sized distances.
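A camera-space version of such a test could be sketched like this (uniform and attribute names are mine; `u_CutoffDistance` is a hypothetical parameter):

```glsl
uniform mat4 u_ModelView;
uniform mat4 u_Projection;
uniform float u_CutoffDistance;  // a world-sized distance, e.g. 100.0

attribute vec4 a_Position;
varying float v_Beyond;

void main()
{
    vec4 viewPos = u_ModelView * a_Position;
    // -viewPos.z is the distance in front of the camera in view space,
    // so the test works in world-sized units on any API.
    v_Beyond = (-viewPos.z > u_CutoffDistance) ? 1.0 : 0.0;
    gl_Position = u_Projection * viewPos;
}
```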

Nicol Bolas
  • 449,505
  • 63
  • 781
  • 982
  • I was a bit confused by the discussion how fragment shaders have a different Z range. Isn't the point that the vertex shader emits the Z (and all other values) interpolated across the poly and _used by_ the fragment shader? If not, what stage(s) have I missed out? – Mr. Boy Nov 14 '11 at 21:08
  • @John: The positions output by the vertex shader are in homogeneous clip-space coordinates. These positions are then clipped, perspective-divided, and then transformed into window space. That's what `glViewport` and `glDepthRange` define: the NDC-to-window-space transform. [Details can be found here.](http://www.arcsynthesis.org/gltut/Basics/Intro%20Graphics%20and%20Rendering.html) – Nicol Bolas Nov 14 '11 at 22:15
  • In my case, I'm using an orthographic projection. Does that mean the positions output by my VS will be the same as passed into the PS stage? They seem to be, X/Y at least. – Mr. Boy Nov 15 '11 at 09:04
  • @John: I don't know how Cg or HLSL handles it, but in OpenGL, `gl_FragCoord` is in *window* space. So it is relative to the viewport you established with `glViewport` and `glDepthRange`. So no, it will not. – Nicol Bolas Nov 15 '11 at 19:21
  • Yes, my mistake. I thought we always worked in the [0,1] range, i.e. independent of render-target dimensions. – Mr. Boy Nov 16 '11 at 14:10