
I'm creating a 2D texture and am getting confused about what data type my texture data is actually stored as. My input is a 2D 32-bit float array that is normalized to 0-1 for now. I'm creating a texture with a call resembling this:

glTexImage2D(self._target, 0, GL_LUMINANCE, GL_LUMINANCE, gl.GL_UNSIGNED_BYTE, (2048, 8192))

And pushing data with something resembling this:

glTexSubImage2D(self._target, 0, x, y, GL_LUMINANCE, GL_FLOAT, data)

I understand that since I'm using luminance the data is clamped to 0-1 as floats (which is why I'm normalizing it) and is made available to me in GLSL as a vec4 (L, L, L, 1). But what data type is actually used to store that float? Is it stored as a single 32-bit float and then made to look like a vec4 in GLSL?

I ask because if I were to switch to GL_R32F or something like that, would my texture take up the same amount of video memory as luminance? Is there any way to not clamp the luminance data to 0-1? And is there a common way, other than adding an alpha channel, to do "fill" values (values that indicate a texel should not be rendered, like NaN or -9999.0)?
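To make the fill-value idea concrete, here's a NumPy sketch of what I mean (the -9999.0 sentinel and the masking step are purely illustrative; in GLSL the equivalent check would become a discard):

```python
import numpy as np

# Hypothetical 2D float32 field with one "no data" texel marked by a
# sentinel value (-9999.0 here, purely illustrative).
FILL_VALUE = -9999.0
data = np.array([[0.25, FILL_VALUE],
                 [0.75, 1.0]], dtype=np.float32)

# Mask of texels that should not be rendered; in a fragment shader this
# would be something like: if (value == fill) discard;
mask = data == FILL_VALUE
valid = np.where(mask, 0.0, data)
```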

Thanks for any help. I'm using the vispy python package that wraps pyopengl.

djhoese
  • `GL_R32F`, unlike `GL_LUMINANCE`, is not a normalized type; it can hold values outside the 0-1 range. `GL_LUMINANCE` likely uses 1 byte per texel internally (I don't know if it's specified anywhere; it's deprecated anyway), while `GL_R32F` uses 4 bytes per texel. – Colonel Thirty Two Oct 21 '15 at 15:39
  • So if luminance is 1 byte per texel, does that mean it is mapping a 0-1 float to an 8-bit unsigned integer (0-255)? – djhoese Oct 21 '15 at 16:16
  • Yes, that's what a normalized texture means; sampling it will always produce a floating-point value between 0 and 1. `GL_R32F` is not normalized; it stores (and gives you) an arbitrary floating-point value. – Colonel Thirty Two Oct 21 '15 at 16:20
  • So this?: https://www.opengl.org/wiki/Normalized_Integer – djhoese Oct 21 '15 at 16:28
  • @ColonelThirtyTwo if you want to make an answer talking about normalized integers and bpps I'll accept it. – djhoese Oct 22 '15 at 13:18
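The memory comparison in these comments can be checked with quick arithmetic, assuming the 1-byte-per-texel internal luminance format mentioned above (the actual internal format is driver-dependent):

```python
# Texture dimensions from the question.
width, height = 2048, 8192
texels = width * height

luminance_bytes = texels * 1   # 8-bit normalized: 1 byte per texel
r32f_bytes = texels * 4        # GL_R32F: 4 bytes per texel

print(luminance_bytes // 2**20)  # 16 (MiB)
print(r32f_bytes // 2**20)       # 64 (MiB)
```

So under that assumption the float texture would use roughly four times the video memory, not the same amount.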

1 Answer


Interestingly, since I originally asked this question I've now become the maintainer of the vispy library. Here's the answer to my own question just to close this out. A lot of it comes from the page below and was originally answered in the comments:

https://www.khronos.org/opengl/wiki/Image_Format

When creating the texture you can specify an internal format whose type is a normalized unsigned or signed integer, a floating-point number, or a non-normalized signed or unsigned integer. The normalized types take whatever data you give them and normalize it to 0-1 based on that data's type: give it uint8 data and it divides by 255; give it uint16 and it divides by 65535. Either way you see 0-1 floating-point numbers in the shader. The non-normalized types (floats and non-normalized signed/unsigned integers) keep their original range.
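To see that rule concretely, here's a small NumPy stand-in for what sampling a normalized unsigned-integer texture does (the helper function is just illustrative, not part of any GL API):

```python
import numpy as np

def sample_normalized(texel):
    # Simulate sampling a normalized unsigned-integer texture: the stored
    # integer is divided by its type's maximum value, yielding a 0-1 float.
    return float(texel) / np.iinfo(texel.dtype).max

print(sample_normalized(np.uint8(255)))     # 1.0
print(sample_normalized(np.uint16(32768)))  # roughly 0.5
```

A float format like GL_R32F skips this step entirely and hands the shader the raw stored value, which is why it can hold things like -9999.0.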

djhoese