
I am trying to use two mipmapped textures within a fragment shader. I cannot combine them prior to loading, as they must be loaded from separate sources (i.e. separate files). During rendering, I have to choose texture data from one sampler or the other based on a per-vertex attribute. For more context, I am working with two dual-channel images (luminance and alpha).

Sampling a mipmapped texture inside a non-uniform conditional block (i.e. a conditional that cannot be evaluated at compile time) gives an undefined result, because the implicit derivatives used for mip level selection are undefined in non-uniform control flow. As such, one approach is to sample the luminance and alpha from each texture unconditionally and then choose the right pair afterwards (with a conditional or a branchless blend; it doesn't matter which).

As an alternative, I was wondering whether it is possible to combine these two dual-channel images during initialization into one four-channel image (i.e. the first two channels come from image one, the last two from image two) and just always fetch all four channels regardless, eliminating the overhead of the second sample. Is this possible?

Here is an example fragment shader showing approach 1:

#version 100

precision mediump float;

varying vec2 v_objectTexCoord;
varying vec3 v_objectColorRGB;
varying float v_objectType; // 0.0 == first texture, 1.0 == second texture

uniform sampler2D u_tex_sampler_1;
uniform sampler2D u_tex_sampler_2;

// NOTE: this shader may only be compatible with OpenGL ES 2.0 at this time!
void main()
{
    // Each texture is a luminance-alpha image: sampling it yields luminance in .r (replicated to .g/.b) and alpha in .a.
    vec4 object_texture_1 = texture2D(u_tex_sampler_1, v_objectTexCoord);
    vec4 object_texture_2 = texture2D(u_tex_sampler_2, v_objectTexCoord);

    float luminance = object_texture_1.r * (1.0 - v_objectType) + object_texture_2.r * v_objectType;
    float alpha = object_texture_1.a * (1.0 - v_objectType) + object_texture_2.a * v_objectType;

    // If the texture is transparent, discard the fragment entirely.
    if (alpha < 0.01)
    {
        discard;
    }

    // Object's color is based on the color attribute, with luminance and alpha applied.
    vec4 color = vec4(v_objectColorRGB * luminance, alpha);
    gl_FragColor = color;
}
  • I'm having trouble understanding why you can not combine the texture on the host side (e.g. read each file into separate buffers and then combine into a single buffer and load that into texture memory). Do they have the same dimensions? – wcochran Aug 01 '23 at 02:16
  • They aren't ensured to be the same dimensions. But this is totally a feasible approach, you are right. – Michael Buerger Aug 01 '23 at 15:59

1 Answer


Yes, you can merge the textures on the CPU at load-time prior to uploading them to the graphics API, and then just treat the merged image as RGBA data in the shader.
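
Here is a minimal sketch of that merge, assuming both images have already been decoded into tightly packed two-byte-per-pixel luminance-alpha buffers of the same dimensions (the names img1, img2, width, and height are placeholders; if the dimensions differ, one image would first need to be padded or resized, as discussed in the comments above):

#include <GLES2/gl2.h>
#include <stdlib.h>

/* Interleave two 2-channel (luminance, alpha) images of identical size
 * into one RGBA image: R/G come from image 1, B/A come from image 2.
 * Returns a malloc'd buffer of width * height * 4 bytes. */
static unsigned char *merge_la_images(const unsigned char *img1,
                                      const unsigned char *img2,
                                      int width, int height)
{
    unsigned char *rgba = malloc((size_t)width * height * 4);
    for (int i = 0; i < width * height; ++i)
    {
        rgba[i * 4 + 0] = img1[i * 2 + 0]; /* luminance of image 1 -> R */
        rgba[i * 4 + 1] = img1[i * 2 + 1]; /* alpha of image 1     -> G */
        rgba[i * 4 + 2] = img2[i * 2 + 0]; /* luminance of image 2 -> B */
        rgba[i * 4 + 3] = img2[i * 2 + 1]; /* alpha of image 2     -> A */
    }
    return rgba;
}

/* Upload the merged image as a single mipmapped RGBA texture. */
static GLuint upload_merged_texture(const unsigned char *rgba,
                                    int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glGenerateMipmap(GL_TEXTURE_2D); /* ES 2.0 needs power-of-two dimensions for mipmaps */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

In the fragment shader there is then only one sampler: read it once and select the channel pair with the same kind of blend as in approach 1, e.g. mix(texel.r, texel.b, v_objectType) for luminance and mix(texel.g, texel.a, v_objectType) for alpha, with no second texture2D call and no non-uniform branch around the sample.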

solidpixel