glViewport: width/height are integers (they are pixels). But glViewportIndexed takes these values as floats. What is the advantage of having them as floats? My understanding is based on the fact that pixels are always integers.
It may look like the glViewport*() calls specify pixel rectangles. But if you look at the details of the OpenGL rendering pipeline, that's not the case. They specify the parameters for the viewport transformation. This is the transformation that maps normalized device coordinates (NDC) to window coordinates.
If x, y, w and h are your specified viewport dimensions, and xNdc and yNdc your NDC coordinates, the viewport transformation can be written like this:
xWin = x + 0.5 * (xNdc + 1.0) * w;
yWin = y + 0.5 * (yNdc + 1.0) * h;
In this calculation, xNdc and yNdc are of course floating point values, in their usual (-1.0, 1.0) range. I do not see any good reason why x, y, w and h should be restricted to integer values in this calculation. This transformation is applied before rasterization, so there is no need to round anything to pixel units.
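For example, plugging fractional viewport values straight into the first equation works without any rounding step (the numbers here are arbitrary, purely for illustration):

/* x = 10.25, w = 100.5, xNdc = 0.0 (center of the NDC range) */
xWin = 10.25 + 0.5 * (0.0 + 1.0) * 100.5;   /* = 60.5, a perfectly valid window coordinate */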
Not needing integer values for the viewport dimensions could even be practically useful. Say you have a window of size 1000x1000, and you want to render 9 sub-views of equal size in the window. There's no reason for the API to stop you from doing what's most natural: make each sub-view the size 333.3333x333.3333, and use those sizes for the parameters of glViewport().
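A minimal sketch of that 3x3 layout, assuming a context that supports ARB_viewport_array (the loop and variable names are just for illustration):

const GLfloat sub = 1000.0f / 3.0f;   /* 333.3333... pixels per sub-view */
for (GLuint row = 0; row < 3; row++)
{
    for (GLuint col = 0; col < 3; col++)
    {
        GLuint index = row * 3 + col;
        /* Fractional origin and size are perfectly legal here. */
        glViewportIndexedf(index, col * sub, row * sub, sub, sub);
    }
}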
If you look at glScissorIndexed() for comparison, you will notice that it still takes integer coordinates. This makes complete sense, because glScissor() does in fact specify a region of pixels in the window, unlike glViewport().
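For instance, pairing one of those fractional sub-views with its scissor rectangle might look like this (a sketch; rounding the scissor box to 333/334 is my choice, not something GL mandates):

/* The viewport happily takes fractional values...        */
glViewportIndexedf(4, 333.3333f, 333.3333f, 333.3333f, 333.3333f);
/* ...but the matching scissor box must be whole pixels.  */
glEnablei(GL_SCISSOR_TEST, 4);
glScissorIndexed(4, 333, 333, 334, 334);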
Answering your new questions in the comments would have proved difficult, so even though Reto Koradi has already answered your original question, I will attempt to answer them here.
@AndonM.Coleman, ok got it. But then why is glViewport have x,y,w,h in integers?
Probably because back when glViewport (...) was created, there was no programmable pipeline. Even back then, sub-pixel offsets were sometimes used (particularly when trying to match rasterization coverage rules for things like GL_LINES and GL_TRIANGLES), but they had to be applied to the transformation matrices.
Now you can do the same thing using the viewport transform instead, which is a heck of a lot simpler (4 scalars needed for the viewport) than passing a giant mat4 (16 scalars) into a Geometry Shader.
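As a rough sketch of that idea (assuming i, x, y, w and h hold a viewport index and its unshifted parameters), a half-pixel nudge is now a one-liner per viewport:

/* Shift viewport i by half a pixel; with the old integer-only    */
/* glViewport this offset had to be baked into a matrix instead.  */
glViewportIndexedf(i, x + 0.5f, y + 0.5f, w, h);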
Does it apply the viewport transformation to all the viewports or only the first viewport.
From the GL_ARB_viewport_array extension specification:

glViewport sets the parameters for all viewports to the same values and is equivalent (assuming no errors are generated) to:

for (GLuint i = 0; i < GL_MAX_VIEWPORTS; i++)
    glViewportIndexedf(i, (GLfloat)x, (GLfloat)y, (GLfloat)w, (GLfloat)h);
@AndonM.Coleman, 2nd question: if VIEWPORT_SUBPIXEL_BITS returns value 4, then will gl_FragCoord.xy have values with offsets (0,0) (0.5, 0) (0, 0.5) and (0.5, 0.5) ?
If you have 4 bits of sub-pixel precision, then what that means is that vertex positions, after transformation, will be snapped to the nearest 1/16th of a pixel. GL actually does not require any sub-pixel bits here; in such a case your vertex positions after transformation into window space would jump in whole-pixel steps, and you would see a lot of "sparklies" as you move anything in the scene.
Those "sparklies" show up as white dots along shared edges as the camera moves: if you do not have enough sub-pixel precision when you transform your vertices, the rasterizer has difficulty properly dealing with edges that are supposed to be adjacent. The technical term for this is a T-junction error, but I am quite fond of the word "sparkly" ;)
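To make the 4-bit part concrete, here is a rough sketch of the snapping (this illustrates the idea, not how any particular driver implements it):

#include <math.h>

/* Hypothetical helper: snap a window-space coordinate to the grid    */
/* implied by VIEWPORT_SUBPIXEL_BITS (16 steps per pixel for 4 bits). */
float snap_to_subpixels(float winCoord, int subpixelBits)
{
    float steps = (float)(1 << subpixelBits);        /* 16 for 4 bits  */
    return floorf(winCoord * steps + 0.5f) / steps;  /* nearest 1/16th */
}

/* snap_to_subpixels(123.4567f, 4) == 123.4375f (1975/16)              */
/* snap_to_subpixels(123.4567f, 0) == 123.0f  (whole-pixel jumps)      */

The actual bit count of your implementation can be queried with glGetIntegerv(GL_VIEWPORT_SUBPIXEL_BITS, &bits).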
As for gl_FragCoord.xy, no, that is actually unaffected by your sub-pixel precision during vertex transform. That is the sample location within your fragment (usually aligned to ... + 0.5, as you point out), and it is unrelated to vertex processing.