I have a std::container of image data in the format of:

struct C
{
    float r;
    float g;
    float b;
    float a;
};

so it looks like:

std::vector<C> colours(width*height);

where width and height are the size of my image.

Now, I want to push this into a Texture in OpenGL.

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X,0,GL_RGBA8,width,height,0,GL_RGBA,GL_FLOAT,colours.data());

(there's 6 in total, one for each side of my cubemap. The width and height are also identical as required by cubemaps).

But I'm getting a black texture. GLIntercept gives me no indication of a problem.

So, after reviewing https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glTexImage2D.xml I believe I should be calling it like:

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X,0,GL_RGBA8,width,height,0,GL_RGBA32F,GL_FLOAT,colours.data());

The change being that I'm indicating that my pixel data consists of 32-bit floats, with 4 per pixel.

However, this gives me a black texture as well, along with GLIntercept telling me this call is generating a GL_INVALID_ENUM.

Reviewing the same documentation tells me that glTexImage2D will cause a GL_INVALID_ENUM under some conditions, none of which I've met.

Basically, I just want to get my container of 4 floats into a texture.

NeomerArcana

1 Answer

I think you should instead try

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA32F,
             width, height, 0, GL_RGBA, GL_FLOAT, colours.data());

It seems you swapped `internalformat` and `format`: `internalformat` (the third argument, e.g. `GL_RGBA32F`) tells OpenGL how to store the texture internally, while `format` (the seventh, e.g. `GL_RGBA`) together with `type` describes the layout of the pixel data you pass in. `GL_RGBA32F` is not a valid value for `format`, which is why your second call raises `GL_INVALID_ENUM`.
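For the cubemap as a whole, the six face targets are consecutive enum values, so all faces can be uploaded in a loop. A sketch under stated assumptions: it assumes an active GL context, your `width`/`height`, and a hypothetical `faces[6]` array holding one `std::vector<C>` per face. Note the texture parameters at the end; the default minification filter expects mipmaps, and a cubemap without them samples as black, which is a common cause of the symptom you describe.

```cpp
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_CUBE_MAP, tex);

// GL_TEXTURE_CUBE_MAP_POSITIVE_X .. GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
// are consecutive, so face i is POSITIVE_X + i.
for (int i = 0; i < 6; ++i)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA32F,
                 width, height, 0, GL_RGBA, GL_FLOAT, faces[i].data());
}

// No mipmaps were uploaded, so switch the min filter away from the
// mipmapped default; otherwise the texture is incomplete and reads black.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
```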

tonso
  • Nope, that's not it. You're right though, I had the `internalFormat` and `format` confused. But that didn't fix it. As I understand it, `internalformat` doesn't matter much as OpenGL will convert whatever I send it *into* that format. It's the `format` of `GL_RGBA` that says there will be 4 components, and the `type` which says each component will be a `GL_FLOAT`. Still not sure what I'm doing wrong... – NeomerArcana Apr 06 '15 at 10:13
  • Actually `internalformat` defines how OGL will store texels internally, and `format` defines the format of input texel data. BTW in OGL ES 2.0 `internalformat` must be equal to `format`, so maybe you should also try to use `GL_RGBA32F` for both `internalformat` and `format`. – tonso Apr 06 '15 at 10:45
  • the documentation clearly states what the options are, that isn't one of the options for regular OpenGL. I believe I've found the answer, but I'm debugging right now. I think the problem is at an earlier stage; unrelated to the above. – NeomerArcana Apr 06 '15 at 10:50