3

I have a bit array representing an image mask, stored in a uint8_t[] container array in row-first (i.e. row-major) order, so each byte holds 8 pixels.

Now, I need to render this with OpenGL (>= 3.0): a set bit should be drawn as a white pixel and a cleared bit as a black pixel.

How could I do this?

The first idea that comes to mind is to develop a specific shader for this. Can anyone give some hints on that?
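To make the layout concrete, addressing a single pixel on the CPU side looks roughly like the helper below (the helper and its names are only illustrative, and the width is assumed to be divisible by 8):

#include <cstdint>

// Illustrative only: fetch mask pixel (x, y) from the packed array.
// bytesPerRow is width / 8; bit 0 of each byte is taken as the leftmost pixel.
inline bool maskPixel(const uint8_t* mask, int bytesPerRow, int x, int y)
{
    uint8_t octet = mask[y * bytesPerRow + x / 8];  // row-major, 8 pixels per byte
    return (octet >> (x % 8)) & 1;
}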

manatttta
  • Which OpenGL version are you even using? If you use one of the older ones with fixed-function pipeline (OpenGL <= 2.1) you wouldn't even need to write a shader (and it should be "good enough" for your use case) – UnholySheep Oct 15 '16 at 14:37
  • Since OpenGL takes a minimum of 8 bits per texel it's very likely that you need to convert the mask to a grayscale image (a sketch of that expansion follows these comments). – pleluron Oct 15 '16 at 14:38
  • The 4.5 spec mentions STENCIL_INDEX1. Not sure what you will use it for so that may not work for you. – Andreas Oct 15 '16 at 14:41
  • "*Now, I need to render this as a 2D image/texture*" Are you trying to sample from such an image or *render to* such an image? "*in row first order.*" Does this guarantee that each byte comes from a specific row? That is, is the image guaranteed to have a width divisible by 8? – Nicol Bolas Oct 15 '16 at 14:43
  • @NicolBolas I'm not sure I understand your question. I want to display this as a white/black pixel 2D image – manatttta Oct 15 '16 at 14:44
  • @manatttta: just FYI: "sampling from an image" means that you want to read from the image data and turn it into picture pixels. "render to an image" means that you have some picture pixels (or are producing them in a program) and want to store them in the data format you have there. – datenwolf Oct 15 '16 at 15:23
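A minimal sketch of pleluron's expansion suggestion, assuming the packed mask lives in `bits` with dimensions `width` × `height` (illustrative names, width assumed divisible by 8): one byte per pixel, 255 for a set bit and 0 for a cleared bit, which can then be uploaded as an ordinary normalized grayscale texture (e.g. GL_R8).

#include <cstdint>
#include <vector>

// Expand the 1-bit mask to one byte per pixel (255 = white, 0 = black).
std::vector<uint8_t> expandMask(const uint8_t* bits, int width, int height)
{
    std::vector<uint8_t> gray(static_cast<size_t>(width) * height);
    const int bytesPerRow = width / 8;               // width assumed divisible by 8
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            uint8_t octet = bits[y * bytesPerRow + x / 8];
            gray[static_cast<size_t>(y) * width + x] = ((octet >> (x % 8)) & 1) ? 255 : 0;
        }
    return gray;
}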

2 Answers

6

You definitely must write a shader for this. First and foremost you want to prevent the OpenGL implementation from reinterpreting the integer bits of your B/W bitmap as numbers in a certain range and mapping them to [0…1] floats. Which means you have to load your bits into an integer image format. Since your image consists of octet groups of binary pixels (byte is a rather unspecific term and can refer to any number of bits, though 8 bits is the usual), a single-channel 8-bit format seems the right choice. The OpenGL-3 moniker for that is GL_R8UI. Keep in mind that the "width" of the texture will be 1/8th of the actual width of your B/W image. Also, for unnormalized access you must use a usampler (for unsigned) or an isampler (for signed) (thanks @derhass for noticing that this was not properly written here).
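A minimal sketch of the matching texture upload, assuming an OpenGL >= 3.0 context with the usual entry points loaded, and assuming the packed mask lives in `bits` with `width`/`height` being the dimensions of the original B/W image (illustrative names; `GL_RED_INTEGER` as the external format is also what the asker confirms in the comments below):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// Packed rows are width / 8 bytes long, which is usually not a multiple of 4,
// so relax the default unpack alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// Integer texture: internal format GL_R8UI, external format GL_RED_INTEGER
// (plain GL_RED would select the normalizing path you want to avoid).
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI,
             width / 8, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, bits);

// Integer textures must not be filtered; sample with nearest filtering.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);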

To access individual bits you use the usual bit-manipulation operators. Since you don't want your bits to be filtered, texel-fetch access must be used. So, to access the binary pixel at integer location (x, y), the following would be used:

uniform usampler2D tex;   // the GL_R8UI mask texture, 1/8th of the image width

// x and y are the integer pixel coordinates in the original B/W image
uint shift = uint(x % 8);                            // bit position inside the octet
uint mask  = 1u << shift;
uint octet = texelFetch(tex, ivec2(x / 8, y), 0).r;  // texelFetch takes an explicit LOD
uint value = (octet & mask) >> shift;                // 1u for a set bit, 0u for a cleared bit
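Put together, a complete fragment shader built around this snippet could look like the sketch below. It is written as a C++ raw string literal ready to hand to glShaderSource(), and `pixelCoord` is an assumed `in` varying carrying the pixel coordinates from the vertex shader (see the comments below on passing them as a vertex attribute). Like the snippet above, it treats bit 0 of each octet as the leftmost pixel; for MSB-first packing use `7 - x % 8` instead.

// Sketch only: minimal GLSL 1.30 (OpenGL 3.0) fragment shader for the mask.
static const char* kMaskFragmentShader = R"glsl(
#version 130

uniform usampler2D tex;   // GL_R8UI mask texture, width/8 texels wide

in  vec2 pixelCoord;      // pixel coordinates in the original image, from the vertex shader
out vec4 fragColor;

void main()
{
    int x = int(pixelCoord.x);
    int y = int(pixelCoord.y);

    uint shift = uint(x % 8);
    uint octet = texelFetch(tex, ivec2(x / 8, y), 0).r;
    uint value = (octet >> shift) & 1u;

    fragColor = vec4(vec3(float(value)), 1.0);   // set bit -> white, cleared bit -> black
}
)glsl";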
datenwolf
  • If you use an unnormalized unsigned integer format, you should also use a `usampler`... – derhass Oct 15 '16 at 15:59
  • @derhass: fixed – datenwolf Oct 15 '16 at 16:36
  • @manatttta: Texture coordinates in pixels. You have to pass them as vertex attribute (into the vertex shader) and from there use an `out` varying to pass it on to the fragment shader. – datenwolf Oct 18 '16 at 17:00
  • Thank you. Still, I have another question (sorry for the abuse). I am using images with > 20k pixels. When creating a texture, I am passing its size as the original image width and height. Since a typical GPU will allow something like 8k pixels, can I hack the texture size in order to pass the whole image? – manatttta Oct 18 '16 at 17:13
  • @manatttta: There are several ways around this. You could use multiple textures. You could use a 2D array texture (i.e. several 2D layers that are not blended, as is the case with 3D textures, and use the layer index to iterate through tiles); a minimal allocation sketch follows these comments. And last but not least you could use sparse textures (https://www.opengl.org/registry/specs/ARB/sparse_texture.txt), which allow you to define a "virtual texture" where only the currently required parts are loaded on demand by your program. – datenwolf Oct 18 '16 at 17:45
  • Worked. Just had to make my texture pixelFormat to be `GL_RED_INTEGER`. Thanks – manatttta Oct 18 '16 at 20:30
  • @datenwolf I'm trying to use texture arrays, using a fixed texture height of 256 pixels, and now I'm accessing the octets as `uint octet = texelFetch(texture, ivec3(x / 8u, y % 256u, y / 256u), 0).r;` but this is not working. I thought that the `z` component of the vector would mean the texture I want, but this does not work and always uses the same texture. Do you have any idea? – manatttta Oct 20 '16 at 21:25
  • @manatttta: Do you have some MCVE source code I could work with? – datenwolf Oct 20 '16 at 22:09
  • @datenwolf I am describing this in http://stackoverflow.com/questions/40176720/openscenegraph-opengl-image-internalformat Please ignore the part that concerns OpenSceneGraph, which is a wrapper lib for OpenGL (unless you are familiar with it :) ) – manatttta Oct 21 '16 at 12:30
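A minimal sketch of the 2D array texture allocation mentioned above, assuming the same illustrative `bits`/`width`/`height` names, 256-row layers matching the `texelFetch(..., ivec3(x / 8, y % 256, y / 256), 0)` lookup, and a height that is a multiple of 256:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

const int layerHeight = 256;
const int layers      = height / layerHeight;   // height assumed to be a multiple of 256

// The packed mask is contiguous in row-major order, so consecutive 256-row
// slices of the buffer line up exactly with consecutive texture layers.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_R8UI,
             width / 8, layerHeight, layers, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, bits);

glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

On the GLSL side the sampler then becomes a `usampler2DArray`, addressed with the `ivec3` fetch quoted above.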
-1

The best solution would be to use a shader, but you could also hack something like this:

std::bitset<8> bits = myuint;   // myuint holds one octet (8 mask pixels) of the array

Then get the values of the single bits with bits.test(position) (or bits[position]) and finally do a simple point drawing.
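As a minimal sketch of that idea (the helper name is illustrative), unpacking one octet of the mask into eight black/white byte values could look like this:

#include <bitset>
#include <cstdint>

// Illustrative only: unpack one octet of the mask into 8 grayscale pixel values.
void unpackOctet(uint8_t myuint, uint8_t out[8])
{
    std::bitset<8> bits(myuint);
    for (int i = 0; i < 8; ++i)
        out[i] = bits.test(i) ? 255 : 0;   // set bit -> white, cleared bit -> black
}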

DLCom