Having gotten into OpenGL a short while ago, I use a fragment shader to render images loaded as textures, with some pixel-wise transformations such as brightness and contrast. I am using OpenGL 4.1, on Mac OS X, with Intel Iris graphics.
Now I need to be able to flip quickly through large images (>40 Mpx each). I'd like to animate through them as fast as possible, and also be able to stop on any one of them and do some GPU work on it, avoiding any round trip to the CPU.
Currently, dealing with one image at a time, my fragment shader is as follows:
#version 410 core
in mediump vec3 ourColor;
in mediump vec2 TexCoord;
out vec4 color;

uniform mediump sampler2D ourTexture;
uniform mediump float alpha; // contrast (gain)
uniform mediump float beta;  // brightness (offset)
uniform mediump float gamma; // gamma-correction exponent

void main()
{
    mediump vec4 textureColor = texture(ourTexture, TexCoord);
    // Linear contrast/brightness adjustment, then per-channel gamma
    mediump vec3 scaledRGB = alpha * textureColor.rgb + beta;
    scaledRGB = pow(scaledRGB, vec3(gamma));
    color = vec4(scaledRGB, textureColor.a);
}
I have read so far (e.g. here and there) that one would use an array of samplers, with a uniform index selecting which texture is actually in use, and that GL_MAX_TEXTURE_IMAGE_UNITS caps the number of textures a shader can use "at once" (unless I misunderstood?).

Is there a way to bypass this texture limit? That is, is there any way to send as many 2D arrays of floats to the GPU as its memory allows, regardless of the texture image unit limit, which I can later use as textures in my shaders? The hardware I'm targeting has GL_MAX_TEXTURE_IMAGE_UNITS = 16 (on my Intel Iris), or typically 32, which is ridiculously small for time series that can run to several hundred images. With recent GPUs carrying up to 2 GB of RAM, I expect there must be a way to store more images than GL_MAX_TEXTURE_IMAGE_UNITS.
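For concreteness, here is a minimal sketch of the sampler-array approach I am referring to; the array size of 16 and the texIndex uniform are illustrative, not taken from my current code:

#version 410 core
in mediump vec2 TexCoord;
out vec4 color;

// One sampler per bound texture unit; the array size cannot exceed
// GL_MAX_TEXTURE_IMAGE_UNITS (16 on my Intel Iris).
uniform sampler2D textures[16];
uniform int texIndex; // selects which of the bound textures to sample

void main()
{
    // Indexing a sampler array with a dynamically uniform expression
    // (here, a uniform int) is allowed in GLSL 4.x.
    color = texture(textures[texIndex], TexCoord);
}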
Am I missing a very simple alternative?
Thanks