I'm trying to load a 2bpp image format into OpenGL textures. The format is just a stream of indexed-color pixels; four pixels fit into one byte, since it's 2 bits per pixel.
My current code works fine in every case except when the image's width is not divisible by 4. I'm not sure whether this has anything to do with the data being 2bpp, since it gets converted to a one-byte-per-pixel array (GLubyte raw[4096]) anyway.
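To put concrete numbers on that (this is just illustrative arithmetic; it mirrors the width formula in the loader code further down):
#include <cstdio>

int main() {
    // Illustrative only: a file is 1 height byte plus (w*h)/4 packed bytes,
    // so for an 18x24 image:
    int size = 1 + (18 * 24) / 4;     // 109 bytes on disk
    int h    = 24;
    int w    = ((size - 1) * 4) / h;  // (108 * 4) / 24 = 18, as expected
    std::printf("%dx%d image from a %d byte file\n", w, h, size);
}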
16x16? Displays fine.
16x18? Displays fine.
18x16? Garbled mess.
22x16? Garbled mess.
etc.
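The pattern, as far as I can tell, is that only widths divisible by 4 display correctly. Spelling that out as plain arithmetic (illustrative only, not part of the loader): after unpacking, every pixel occupies one byte, so one row of the expanded array is exactly w bytes:
#include <cstdio>

int main() {
    const int widths[] = { 16, 18, 20, 22 };
    for (int w : widths)
        // The widths that display correctly are exactly the ones where
        // the expanded row length is a multiple of 4.
        std::printf("w = %2d -> expanded row is %2d bytes, w %% 4 = %d\n", w, w, w % 4);
}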
Here is what I mean by "works" vs. "garbled mess" (resized to 3x):
Here is my code:
GLubyte raw[4096];
std::ifstream bin(file, std::ios::ate | std::ios::binary | std::ios::in);
unsigned short size = bin.tellg();
bin.clear();
bin.seekg(0, std::ios::beg);
// first byte is height; width is calculated from a combination of filesize & height
// this part works correctly every time
char ch = 0;
bin.get(ch);
ubyte h = ch;
ubyte w = ((size-1)*4)/h;
printf("%dx%d Filesize: %d (%d)\n", w, h, size-1, (size-1)*4);
// fill it in with 0's which means transparent.
for (int ii = 0; ii < w*h; ++ii) {
    if (ii < 4096) {
        raw[ii] = 0x00;
    } else {
        return false;
    }
}
size_t i = 0;
while (bin.get(ch)) {
    // 2bpp mode:
    // take each byte in the file and split it into 4 bytes, one per pixel.
    raw[i]   = (ch & 0x03);
    raw[i+1] = (ch & 0x0C) >> 2;
    raw[i+2] = (ch & 0x30) >> 4;
    raw[i+3] = (ch & 0xC0) >> 6;
    i = i + 4;
}
texture_sizes[id][1] = w;
texture_sizes[id][2] = h;
glGenTextures(1, &textures[id]);
glBindTexture(GL_TEXTURE_2D, textures[id]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLenum fmt = GL_RED;
// replicate the single red channel into RGB; force alpha to fully opaque
GLint swizzleMask[] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, fmt, GL_UNSIGNED_BYTE, raw);
glBindTexture(GL_TEXTURE_2D, 0);
What's actually happening, for some reason, is that the image is being treated as if it were 20x24; OpenGL (probably?) seems to be rounding the width up to the nearest multiple of 4, which would be 20. This is despite the w value in my code being correct at 18; it's as if OpenGL is saying "no, I'm going to make it 20 pixels wide internally."
However, since the texture is still rendered as an 18x24 rectangle, the last 2 pixels of each row, which should really be the first 2 pixels of the next row, are simply not being rendered.
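Spelling that theory out with plain offset arithmetic (illustrative only, and assuming my "rounded up to 20" guess is really what's happening):
#include <cstdio>

int main() {
    // If each stored row consumed 20 of my bytes instead of 18,
    // texture row r would be read from offset 20*r of my data rather than 18*r,
    // so the mismatch would grow by 2 pixels per row.
    for (int r = 0; r < 6; ++r)
        std::printf("row %d: my data = offset %3d, 20-wide read = offset %3d (drift %2d px)\n",
                    r, 18 * r, 20 * r, 2 * r);
}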
Here's what happens when I force my code's w value to always be 20 instead of 18 (I just replaced w = ((size-1)*4)/h with w = 20):
And here's when my w value is 18 again, as in the first image:
As you can see, the image is a whole 2 pixels wider; those 2 pixels at the end of every row should be on the next row, because the width is supposed to be 18, not 20!
This proves that, for whatever reason, the texture bytes were internally parsed and stored as if the image were 20x24 instead of 18x24. Why that is, I can't figure out, and I've been trying to solve this specific problem for days. I've verified that the raw bytes are all the values I expect; there's nothing wrong with my data format. Is this an OpenGL bug? Why is OpenGL internally storing my texture as 20x24 when I clearly told it to store it as 18x24? The rest of my code recognizes that the width is 18, not 20; it's only OpenGL itself that doesn't.
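For reference, the verification was done with a throwaway dump along these lines (dump_indices is a hypothetical helper name, not part of the loader above; I call it with raw, w and h right after the unpack loop):
#include <cstdio>

// Hypothetical debug helper: print the expanded palette indices row by row
// so they can be compared against the source image by eye.
void dump_indices(const unsigned char* raw, int w, int h) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x)
            std::printf("%d", raw[y * w + x]);
        std::printf("\n");
    }
}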
Finally, one more note: I've tried loading the exact same file, in the exact same way, with the LÖVE framework (Lua): same size, same bytes as in my C++ version. I dumped those bytes into love.image.newImageData and it displays just fine!
That's the final proof that it's not my format's problem; it's very likely OpenGL's problem or something in the code above that I'm overlooking.
How can I solve this problem? By which I mean: OpenGL is storing my texture internally with an incorrect width (20 instead of the 18 I passed to glTexImage2D), and is therefore reading my raw unsigned bytes incorrectly.