I'm creating a 2D texture and am getting confused about what data type my texture is actually stored as. My input is a 2D 32-bit float array, normalized to 0-1 for now. I'm creating the texture with a call resembling this:
glTexImage2D(self._target, 0, GL_LUMINANCE, 2048, 8192, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, None)
And pushing data with something resembling this:
glTexSubImage2D(self._target, 0, x, y, w, h, GL_LUMINANCE, GL_FLOAT, data)
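For context, the input data I'm uploading is prepared roughly like this (a minimal numpy sketch; the shape here is smaller than my real 2048x8192 array, and the value range is just illustrative):

```python
import numpy as np

# Hypothetical raw input: a 2D array of 32-bit floats with an arbitrary range.
raw = np.random.uniform(-9999.0, 5000.0, size=(64, 64)).astype(np.float32)

# Normalize to 0-1 so the luminance upload doesn't clamp away information.
lo, hi = raw.min(), raw.max()
data = ((raw - lo) / (hi - lo)).astype(np.float32)
```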
I understand that since I'm using luminance, the data is clamped to the 0-1 range as floats (which is why I'm normalizing it) and is made available in GLSL as a vec4 (L, L, L, 1). But what data type is actually used to store that float? Is it stored as a single 32-bit float and then exposed as a vec4 in GLSL?
I ask because if I were to switch to GL_R32F or something similar, would my texture take up the same amount of video memory as luminance does? Is there any way to keep the luminance data from being clamped to 0-1? And is there a common way, other than adding an alpha channel, to do "fill" values (values indicating a texel should not be rendered, like NaN or -9999.0)?
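To illustrate what I mean by fill values, here is a rough numpy sketch of the kind of pre-upload handling I have in mind (the FILL sentinel and the separate validity mask are hypothetical, not anything vispy or OpenGL provides):

```python
import numpy as np

FILL = -9999.0  # hypothetical sentinel meaning "do not render this texel"

data = np.array([[0.25, FILL],
                 [0.75, 0.5]], dtype=np.float32)

# Mask of texels that carry real data; fill texels would have to be
# flagged some other way (extra channel, second texture, etc.).
valid = data != FILL

# Replace fill values with 0 so the normalized luminance upload stays in 0-1,
# at the cost of losing the "this texel is empty" information.
clean = np.where(valid, data, 0.0).astype(np.float32)
```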
Thanks for any help. I'm using the vispy Python package, which wraps PyOpenGL.