
I'm trying to pass an array of ints into the fragment shader by using a 1D texture. Although the code compiles and runs, when I look at the texture values in the shader, they are all zero!

This is the C++ code I have after following a number of tutorials:

GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 5); // use the 5th since first 4 may be taken
glBindTexture  (GL_TEXTURE_1D, texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RED_INTEGER, myVec.size(), 0, 
                               GL_RED_INTEGER, GL_INT, &myVec[0]);

GLint textureLoc =  glGetUniformLocation( program, "myTexture" );
glUniform1i(textureLoc, 5); 

And this is how I try to access the texture in the shader:

uniform sampler1D myTexture; 
int dat = int(texture1D(myTexture, 0.0).r); // 0.0 is just an example 
if (dat == 0) { // always true!

I'm sure this is some trivial error on my part, but I just can't figure it out. I'm unfortunately constrained to using GLSL 1.20, so this syntax may seem outdated to some.

So why are the texture values in the shader always zero?

EDIT:

If I replace the ints with floats, I still have a problem:

std::vector <float> temp; 
// fill temp...
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_R32F, temp.size(), 0, GL_R32F, GL_FLOAT, &temp[0]);
// ...
glUniform1f(textureLoc, 5);

This time, just reading from the sampler seems to mess up the other textures...

nbubis
  • I think the result is normalized to a [0, 1] float for integer-format texture fetches. Try an `isampler1D` sampler – a.lasram Dec 19 '13 at 00:34
  • @AndonM.Coleman - `isampler1D` won't compile, and `GL_R16I` and `GL_R32I` don't change anything. – nbubis Dec 19 '13 at 00:34
  • @AndonM.Coleman - It doesn't seem to work with floats either - I've added the code to the question. Thank you again for your time! – nbubis Dec 19 '13 at 00:50
  • See my answer, and do not use a floating-point texture. Use a regular fixed-point (unsigned normalized is the technical term) texture. So something like `glTexImage1D (GL_TEXTURE_1D, 0, GL_R8, myVec.size (), 0, GL_RED, GL_UNSIGNED_BYTE, &myVec [0]);`. You will need to make adjustments to the size and type and the GLSL shader if you need to store more than 256 values. – Andon M. Coleman Dec 19 '13 at 00:56

1 Answer


To begin with, GL_RED_INTEGER is not valid as the internal format; it is a pixel transfer format. I would use GL_R32I (32-bit signed integer) for the internal format instead; you could also use GL_R8I or GL_R16I depending on your actual storage requirements, and smaller types are generally better. Also, do not use a sampler1D for an integer texture; use an isampler1D.
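
For reference, the full integer-texture path is only an option on a context with OpenGL 3.0+ / GLSL 1.30+ (which, as explained below, is not what you have). A sketch of what it would look like there, reusing myVec from your question:

// Requires a GL 3.0+ context; integer textures must use GL_NEAREST filtering.
GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 5);
glBindTexture(GL_TEXTURE_1D, texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// internalFormat GL_R32I (sized integer), format GL_RED_INTEGER, type GL_INT.
glTexImage1D(GL_TEXTURE_1D, 0, GL_R32I, (GLsizei)myVec.size(), 0,
             GL_RED_INTEGER, GL_INT, &myVec[0]);

// In a GLSL 1.30+ shader the lookup would then be:
//   uniform isampler1D myTexture;
//   int dat = texelFetch(myTexture, idx, 0).r;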

Since OpenGL ES does not support data type conversion during pixel transfer (e.g. in glTexImage2D (...)), the OpenGL ES docs contain tables listing exactly which combinations of format, internal format and type go together; those tables are also a good reference for the optimal combinations to use here.
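
A mismatched combination does not crash, it just records an error and the upload silently does nothing, so (as the comments below also suggest) it helps to check glGetError right after the upload call. A minimal sketch:

// GL_INVALID_ENUM right after glTexImage1D usually means this context does not
// accept the internalFormat / format / type combination you passed.
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "glTexImage1D error: 0x%04x\n", err);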


You cannot use integer textures in OpenGL 2.1 at all, if we are back to the same problem you were having yesterday with UBOs. If you cannot get a core profile context on OS X, you are limited to GLSL 1.20 (OpenGL 2.1). The constants for GL_R32I, GL_RED_INTEGER, etc. will still be defined, but using them will generate GL_INVALID_ENUM errors at runtime in OS X's OpenGL 2.1 implementation.

That is not to say you cannot pack an integer value into a standard fixed-point texture and get an integer value back out in a shader. If you use an 8-bit per-component format (e.g. GL_R8) you can store values in the range 0 - 255. In your shader, after you do a texture lookup (use GL_NEAREST for the texture filter; linear filtering will really mess things up), multiply the floating-point result by 255.0 and convert it to int. It is far from perfect, but we got along fine without integer textures for many years.
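
On the C++ side, a sketch of the matching upload, reusing the question's myVec, program, and texture unit 5, and assuming the values already fit in 0 - 255:

// Pack the ints into unsigned bytes (GL_R8 stores 8 bits per texel).
std::vector<GLubyte> packed(myVec.begin(), myVec.end());

GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 5);
glBindTexture(GL_TEXTURE_1D, texture);

// GL_NEAREST on both filters so values are never blended between texels.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// internalFormat GL_R8 (unsigned normalized), format GL_RED, type GL_UNSIGNED_BYTE.
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, (GLsizei)packed.size(), 0,
             GL_RED, GL_UNSIGNED_BYTE, packed.data());

// Sampler uniforms are always set with glUniform1i, never glUniform1f.
glUniform1i(glGetUniformLocation(program, "myTexture"), 5);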

Here is a modification to your shader that does exactly that:

#version 120

uniform sampler1D myTexture;
int dat = int(texture1D(myTexture, (float(idx) + 0.5) / float(textureWidth)).r * 255.0); // GLSL uses constructor-style casts; idx and textureWidth must be supplied (e.g. as uniforms)
if (dat == 0) { // not always true!

This assumes GL_R8 for the internal format; use 65535.0 (and GL_UNSIGNED_SHORT data) for GL_R16.
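
For completeness, a sketch of the 16-bit variant, assuming the values fit in 0 - 65535:

// 16-bit unsigned normalized variant, for values in the range 0 - 65535.
std::vector<GLushort> packed16(myVec.begin(), myVec.end());
glTexImage1D(GL_TEXTURE_1D, 0, GL_R16, (GLsizei)packed16.size(), 0,
             GL_RED, GL_UNSIGNED_SHORT, packed16.data());

// In the shader, scale by 65535.0 instead of 255.0:
//   int dat = int(texture1D(myTexture, coord).r * 65535.0);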

Andon M. Coleman
  • `texelFetch` won't compile, and `float dat = texture1D(myTexture, 0.0).r * 65535.0;` always gives zero as well :( – nbubis Dec 19 '13 at 01:17
  • @Nathaniel which `#version` directive are you using? – vallentin Dec 19 '13 at 01:19
  • 1
    @Nathaniel: Yeah, I forgot `texelFetch (...)` is not in GLSL 1.20 for a minute there. I will have an updated answer shortly, you are going to have to do some math to compute the floating-point texture coordinate from your integer index. This math will require you to know the dimensions of your texture in the shader. See my comment about the texture filter - it needs to be `GL_NEAREST` for this sort of thing to work. – Andon M. Coleman Dec 19 '13 at 01:20
  • the result is still always zero - I'm tearing my hair out :( – nbubis Dec 19 '13 at 02:45
  • @Nathaniel: Can you show the call you used to supply data to your 1D texture? I think you are using invalid enums, you were in the last edit to your question anyway (`format` should be **GL_RED** in that case - sized formats like **GL_R32F** are only for `internalFormat`). – Andon M. Coleman Dec 19 '13 at 02:50
  • Using this `glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, (temp).size(), 0, GL_R8, GL_UNSIGNED_BYTE, &(temp)[0]);` as you suggested. – nbubis Dec 19 '13 at 02:53
  • @Nathaniel: That is not actually what I suggested, the second instance of **GL_R8** needs to be **GL_RED**. Only the first **GL_R8** (`internalFormat`) and **GL_UNSIGNED_BYTE** (`type`) constants will need to be changed if you decide to use a larger data type for your texture, **GL_RED** will always be the correct value to use for `format`. If you make a habit of calling `glGetError (...)` after attempting these changes, that might help a lot. The code you just mentioned, for instance, should generate **GL_INVALID_ENUM** – Andon M. Coleman Dec 19 '13 at 02:56