
OK, so I need to create my own texture/image data and then display it on a quad in OpenGL. I have the quad working, and I can display a TGA file on it with my own texture loader; it maps to the quad perfectly.

But how do I create my own "homemade" image that is 1000x1000 with 3 channels (RGB values) per pixel? What is the format of the texture array, and how do I, for example, set pixel (100,100) to black?

This is how I would imagine it for a completely white image/texture:

#define SCREEN_WIDTH 1000
#define SCREEN_HEIGHT 1000

unsigned int* texdata = new unsigned int[SCREEN_HEIGHT * SCREEN_WIDTH * 3];
for(int i=0; i<SCREEN_HEIGHT * SCREEN_WIDTH * 3; i++)
        texdata[i] = 255;

GLuint t = 0;
glEnable(GL_TEXTURE_2D);
glGenTextures( 1, &t );
glBindTexture(GL_TEXTURE_2D, t);

// Set parameters to determine how the texture is resized
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MIN_FILTER , GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MAG_FILTER , GL_LINEAR );
// Set parameters to determine how the texture wraps at edges
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_S , GL_REPEAT );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_T , GL_REPEAT );
// Upload the generated texture data to the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, texdata);
glGenerateMipmap(GL_TEXTURE_2D);

EDIT: The answers below are correct, but I also found that OpenGL doesn't handle the plain ints I was using; it works fine with uint8_t. I assume that's because of the GL_RGB together with GL_UNSIGNED_BYTE (which is only 8 bits, while a plain int is not 8 bits) that I use when I upload to the GPU.
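
For reference, a minimal sketch of what the buffer looks like with 8-bit channels (std::vector and .data() in place of the raw new[] above are my own substitution):

#include <cstdint>
#include <vector>

const int SCREEN_WIDTH  = 1000;
const int SCREEN_HEIGHT = 1000;

// One uint8_t per channel, three channels per pixel,
// matching GL_RGB + GL_UNSIGNED_BYTE.
std::vector<std::uint8_t> texdata(SCREEN_WIDTH * SCREEN_HEIGHT * 3, 255); // all white

// Then upload exactly as before, passing texdata.data() as the pixel pointer:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0,
//              GL_RGB, GL_UNSIGNED_BYTE, texdata.data());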

Jackbob
  • I don't understand, have you tried doing that? You just need to pass a big chunk of bytes that holds your data instead of the image data. Post code that shows how you pass data from your image loader into a GL texture, and then your attempt to pass self-made image data to a texture, and tell us what isn't working. – zero298 Oct 19 '17 at 15:30
  • I realised that you need to specifically create unsigned 8 bit integers for OpenGL to interpret them correctly as RGB values. I have literally been trying this for days and managed to solve it like 2 min after I decided to post this.... – Jackbob Oct 19 '17 at 15:45
  • Eh, `GL_UNSIGNED_INT` is a valid `glTexImage2D()` '`type`' too, you'd just have to expand your color channel range from [0, 2^8-1] to [0, 2^32-1]. – genpfault Oct 19 '17 at 15:48
  • Yes, I came to that conclusion as well. I added an edit to my original post to clarify. – Jackbob Oct 19 '17 at 15:58
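
As genpfault's comment notes, GL_UNSIGNED_INT is also a valid type for glTexImage2D(); the channel values just have to cover the full 32-bit range. A rough sketch of that variant (the std::vector and the 0xFFFFFFFFu fill value are illustrative, not from the post):

#include <vector>

const int W = 1000, H = 1000;

// One unsigned int per channel; "full brightness" is now 0xFFFFFFFF, not 255.
std::vector<unsigned int> texdata(W * H * 3, 0xFFFFFFFFu);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0,
             GL_RGB, GL_UNSIGNED_INT, texdata.data());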

2 Answers


But how do I create my own "homemade image", that is 1000x1000 and 3 channels (RGB values) for each pixel?

std::vector< unsigned char > image( 1000 * 1000 * 3 /* bytes per pixel */ );

What is the format of the texture array

Red byte, then green byte, then blue byte. Repeat.

how do I for example set pixel (100,100) to black?

unsigned int width = 1000;
unsigned int x = 100;
unsigned int y = 100;
unsigned int location = ( x + ( y * width ) ) * 3;
image[ location + 0 ] = 0; // R
image[ location + 1 ] = 0; // G
image[ location + 2 ] = 0; // B

Upload via:

// the rows in the image array don't have any padding
// so set GL_UNPACK_ALIGNMENT to 1 (instead of the default of 4)
// https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glTexImage2D
    (
    GL_TEXTURE_2D, 0,
    GL_RGB, 1000, 1000, 0,
    GL_RGB, GL_UNSIGNED_BYTE, &image[0]
    );
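
As a quick sanity check of that layout, filling the buffer with a simple gradient before uploading makes any stride or channel-order mistake obvious (a sketch reusing the image vector from above):

for (unsigned int y = 0; y < 1000; ++y)
{
    for (unsigned int x = 0; x < 1000; ++x)
    {
        unsigned int location = (x + y * 1000) * 3;
        image[location + 0] = (unsigned char)(x * 255 / 999); // R increases along a row
        image[location + 1] = (unsigned char)(y * 255 / 999); // G increases with the row index
        image[location + 2] = 0;                              // B stays 0
    }
}
// then upload with the glPixelStorei()/glTexImage2D() calls above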
genpfault
  • You are welcome! I also feel that explaining the unpack alignment thing would be nice. – lisyarus Oct 19 '17 at 15:37
  • Thanks! I managed to solve it as well after trying for days. It seems I can't use normal unsigned ints, but rather have to use uint8_t to get 8-bit values, because OpenGL doesn't know how to interpret the larger type here. – Jackbob Oct 19 '17 at 15:52
  • @Jackbob You can use unsigned ints, but you'll have to specify `GL_UNSIGNED_INT` rather than `GL_UNSIGNED_BYTE` in that case. However, I doubt that you really need this: one byte per channel is in most cases good enough for the human eye. – lisyarus Oct 19 '17 at 16:05
  • The reference to `GL_UNPACK_ALIGNMENT` is really great; thanks! – Samuel Li Aug 10 '18 at 20:54

By default, OpenGL expects each row of a texture to be aligned to 4 bytes. This is an RGB texture, which needs 24 bits (3 bytes) per texel, and its rows are tightly packed. That means the 4-byte alignment at the start of each row is not respected (unless 3 times the width of the texture happens to be divisible by 4 without a remainder).

To deal with that, the alignment has to be changed to 1. This means the GL_UNPACK_ALIGNMENT parameter has to be set before uploading a tightly packed texture to the GPU (glTexImage2D):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

Otherwise each row is read with an offset of 0-3 bytes, which makes the texture look continuously skewed or tilted when it is sampled.
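
To illustrate where those 0-3 bytes come from, this is roughly how the assumed row size follows from the alignment (a standalone sketch, not actual OpenGL code):

#include <cstdio>

int main()
{
    const int bytesPerPixel = 3; // GL_RGB + GL_UNSIGNED_BYTE
    const int alignment     = 4; // default GL_UNPACK_ALIGNMENT
    const int widths[]      = { 1000, 1001, 1002, 1003 };

    for (int width : widths)
    {
        int tightRow  = width * bytesPerPixel;
        int paddedRow = ((tightRow + alignment - 1) / alignment) * alignment;

        // With alignment 4 the driver assumes each row occupies paddedRow bytes,
        // so in a tightly packed buffer every following row is read
        // paddedRow - tightRow bytes past its real start, which accumulates
        // into the skew described above.
        std::printf("width %d: tight row %d bytes, assumed %d bytes, skew %d\n",
                    width, tightRow, paddedRow, paddedRow - tightRow);
    }
}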

Since you use the source format GL_RGB with the type GL_UNSIGNED_BYTE, each pixel consists of 3 color channels (red, green and blue), and each color channel is stored in one byte in the range [0, 255].

If you want to set the pixel at (x, y) to the color (R, G, B), this is done like this:

texdata[(y*WIDTH+x)*3+0] = R;
texdata[(y*WIDTH+x)*3+1] = G;
texdata[(y*WIDTH+x)*3+2] = B;
Rabbid76