I use this fragment shader (inspired by some tutorial found on the NVIDIA site some time ago). It basically computes bi-linear interpolation of a 2D texture.

uniform sampler2D myTexture;
uniform vec2 textureDimension;

#define texel_size_x 1.0 / textureDimension[0]
#define texel_size_y 1.0 / textureDimension[1]

vec4 texture2D_bilinear( sampler2D texture, vec2 uv)
{
    vec2 f;
    uv = uv + vec2( - texel_size_x / 2.0, - texel_size_y / 2.0);

    f.x = fract( uv.x * textureDimension[0]);
    f.y = fract( uv.y * textureDimension[1]);

    vec4 t00 = texture2D( texture, vec2(uv));
    vec4 t10 = texture2D( texture, vec2(uv) + vec2( texel_size_x, 0));
    vec4 tA = mix( t00, t10, f.x);

    vec4 t01 = texture2D( texture, vec2(uv) + vec2( 0, texel_size_y));
    vec4 t11 = texture2D( texture, vec2(uv) + vec2( texel_size_x, texel_size_y));
    vec4 tB = mix( t01, t11, f.x);

    vec4 result = mix( tA, tB, f.y);
    return result;
}
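For reference, here is what the shader is meant to compute, sketched on the CPU (a Python sketch of bilinear interpolation over a single-channel texture; the helper names are mine, not from the shader):

```python
import math

def texel(tex, x, y):
    # Nearest-texel lookup with edge clamping (roughly GL_NEAREST + edge clamp).
    h, w = len(tex), len(tex[0])
    xi = min(max(int(math.floor(x)), 0), w - 1)
    yi = min(max(int(math.floor(y)), 0), h - 1)
    return tex[yi][xi]

def bilinear(tex, u, v):
    # u, v are normalized [0, 1] coordinates; the -0.5 is the same
    # half-texel shift the shader applies before sampling.
    h, w = len(tex), len(tex[0])
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    t00 = texel(tex, x0,     y0)
    t10 = texel(tex, x0 + 1, y0)
    t01 = texel(tex, x0,     y0 + 1)
    t11 = texel(tex, x0 + 1, y0 + 1)
    ta = t00 * (1 - fx) + t10 * fx   # mix(t00, t10, f.x)
    tb = t01 * (1 - fx) + t11 * fx   # mix(t01, t11, f.x)
    return ta * (1 - fy) + tb * fy   # mix(tA, tB, f.y)

tex = [[0.0, 1.0],
       [2.0, 3.0]]
print(bilinear(tex, 0.5, 0.5))    # 1.5, the average of all four texels
print(bilinear(tex, 0.25, 0.25))  # 0.0, exactly the top-left texel
```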

It looks quite simple and straightforward. I recently tested it on several ATI cards (latest drivers ...) and I get the following result:

[Images: left, the original data with nearest-neighbor sampling; right, the shader in use]

As you can see, some horizontal and vertical lines appear. It's important to mention that these are fixed neither in viewport coordinates nor in texture coordinates.

I have had to port several shaders to make them work correctly on ATI cards; it seems NVIDIA's implementation is a little more permissive of the bad code I have sometimes written. But in this case I don't see what I should change!

Is there anything general I should know about differences between the NVIDIA and ATI GLSL implementations to overcome this?

Thomas Vincent

2 Answers


nVidia is more permissive; for example, nVidia lets you cast wrongly (i.e. float4 to float) and only makes it a warning, while ATI won't (it's an error). There is a bigger difference if you use OpenGL than if you use DirectX; for example, I had a quite complex vertex shader (matrix palette skinning) and, even without the slightest warning, it didn't work on ATI (but did on nVidia).

So if you want to make shaders that work 'everywhere', get an ATI card :-) (or better, an integrated chipset ^^).

Valmond
  • ps. the bug you got might be a problem with how the texture is treated at the texture lookup (i.e. use CLAMP or MIRROR), or again an overflow; add all the lookups together and multiply by 0.25 (i.e. divide by 4) instead of using the mix() function. – Valmond Jun 29 '11 at 18:04
  • @Valmond if I divide by 4 all the time, I think I am going to lose the bi-linear part of this bi-linear interpolation shader :) ... but I got your point, I will try to do the *mix* part by hand. – Thomas Vincent Jun 29 '11 at 18:13
  • I might be explaining badly: just add the 4 texture lookups together and divide by 4 (as you do 4 lookups, you need to divide by 4 if you add them together). That will do the bilinear interpolation (and no overflows). – Valmond Jun 29 '11 at 18:17
  • if that doesn't do it you might want to remove the 'fract()' call (just go with the value right away (ie. X instead of fract(X) ), combined with CLAMP. – Valmond Jun 29 '11 at 18:19
  • @Valmond: Adding the four texels together and dividing by four would produce a *constant* value across that area of the texture. It wouldn't be interpolation of any kind. – Nicol Bolas Jun 29 '11 at 18:26
  • Hmmm, effectively, if the fract value (f.x, f.y) isn't 0.5 then you are right, but then again it should be ( 0.5 ) ... – Valmond Jun 29 '11 at 18:35
  • @Valmond: Why should it be 0.5? It will change as you move across the rendered image; that's what bilinear filtering _does_. – Nicol Bolas Jun 29 '11 at 18:39
  • well, for me bilinear filtering takes an average of four values (x,y ; x+dx,y ; x,y+dy ; x+dx,y+dy, where usually dx & dy are one texel 'away' in the texture). In my shaders I sample a first spot (say 0.2674, 0.3), then I sample spots one texel away in the x,y directions (as you do) and then I take the average (sum/4). – Valmond Jun 29 '11 at 18:46
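To make the disagreement in these comments concrete, here is a small numeric check (Python; the texel values are made up for illustration): the sum-divided-by-four average is constant over a cell, whereas the mix()-based weights change with the fractional position:

```python
# Four made-up neighboring texel values (single channel).
t00, t10, t01, t11 = 0.0, 1.0, 0.0, 1.0

def mix(a, b, t):
    # GLSL mix(): linear interpolation between a and b.
    return a * (1.0 - t) + b * t

def bilinear_weighted(fx, fy):
    # The shader's scheme: weights depend on the fractional position.
    return mix(mix(t00, t10, fx), mix(t01, t11, fx), fy)

avg = (t00 + t10 + t01 + t11) / 4.0        # always 0.5, wherever you sample
print(avg, bilinear_weighted(0.5, 0.5))    # equal only when fx = fy = 0.5
print(bilinear_weighted(0.2, 0.7))         # 0.2 -- varies across the cell
```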

As far as I can tell, you're not quantizing the texture coordinate before sampling the texture. This could very well be the reason those lines appear: at those places your input texture coordinate comes to lie exactly on the border between texels. So you need to quantize uv to texel centers first. Keep in mind that texture coordinates 0 and 1 are not at texel centers; they define the borders of a grid, where the grid cells are the texels and texel centers are at (texel_n + 0.5) / texture_dim. Alternatively, use the GLSL function texelFetch, available in GLSL 1.30 and later.
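The texel-center arithmetic from this answer can be sketched like this (Python; the function names are mine, for illustration only):

```python
import math

def texel_center(texel_n, texture_dim):
    # Center of texel number texel_n, in normalized [0, 1] coordinates:
    # (texel_n + 0.5) / texture_dim.
    return (texel_n + 0.5) / texture_dim

def quantize_to_texel_center(u, texture_dim):
    # Snap a normalized coordinate to the center of the texel it falls in,
    # so a subsequent lookup can never land on a texel border.
    texel_n = math.floor(u * texture_dim)
    return texel_center(texel_n, texture_dim)

# For a texture 4 texels wide, centers sit at 0.125, 0.375, 0.625, 0.875;
# note that 0 and 1 are borders, not centers:
print([texel_center(n, 4) for n in range(4)])
print(quantize_to_texel_center(0.3, 4))  # 0.375
```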

datenwolf
  • Thanks! I moved `uv` by an offset of `-f*texel_size` ... and it's ok. I was naively hoping that `glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);` was applying the texel color to the complete cell ... I understand there is some choice to make on the grid border (0 and 1), but it should be deterministic, right? – Thomas Vincent Jun 30 '11 at 15:18