I have a std::vector of image data where each pixel is stored as:
struct C
{
    float r;
    float g;
    float b;
    float a;
};
so it looks like:
std::vector<C> colours(width*height);
where width and height are the size of my image.
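As an aside, I'm relying on C being tightly packed so that colours.data() behaves like a flat array of width*height*4 floats. As far as I understand, a compile-time check along these lines would confirm that (the static_assert is purely illustrative, not something the upload needs):

static_assert(sizeof(C) == 4 * sizeof(float),
              "C must be tightly packed for colours.data() to act as a flat float array");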
Now, I want to push this into a Texture in OpenGL.
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X,0,GL_RGBA8,width,height,0,GL_RGBA,GL_FLOAT,colours.data());
(There are six calls in total, one for each side of my cubemap. The width and height are also identical, as required by cubemaps.)
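For context, the full upload looks roughly like the following. This is a sketch rather than my exact code; cubemapTexture and faceColours stand in for my real texture object and per-face buffers:

glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
for (int face = 0; face < 6; ++face)
{
    // The six cube map face targets are consecutive enum values, so offsetting
    // GL_TEXTURE_CUBE_MAP_POSITIVE_X by 'face' covers each side in turn.
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                 width, height, 0, GL_RGBA, GL_FLOAT,
                 faceColours[face].data());
}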
But I'm getting a black texture. GLIntercept gives me no indication of a problem.
So, after reviewing https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glTexImage2D.xml I believe I should be calling it like this:
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X,0,GL_RGBA8,width,height,0,GL_RGBA32F,GL_FLOAT,colours.data());
The change is that I'm indicating my pixel data consists of 32-bit floats, four per pixel.
However, this gives me a black texture as well, and GLIntercept tells me this call is generating a GL_INVALID_ENUM.
Reviewing the same documentation tells me that glTexImage2D will cause a GL_INVALID_ENUM under some conditions, none of which I've met.
Basically, I just want to get my container of 4-float pixels into an OpenGL texture.
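To make the problem trivially reproducible, my understanding is that even a constant fill like this (purely a test pattern; my real data comes from elsewhere) should produce a visibly red face rather than a black one:

for (C& c : colours)
{
    c.r = 1.0f; // solid opaque red, just to rule out the source data
    c.g = 0.0f;
    c.b = 0.0f;
    c.a = 1.0f;
}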