I am using WebGL to resize images client-side very quickly within an app I am working on. I have written a GLSL shader that performs simple bilinear filtering on the images I am downsizing.

It works fine for the most part, but there are many occasions where the resize is huge, e.g. from a 2048x2048 image down to 110x110 in order to generate a thumbnail. In these instances the quality is poor and far too blurry.

My current GLSL shader is as follows:

    uniform float textureSizeWidth;
    uniform float textureSizeHeight;
    uniform float texelSizeX;
    uniform float texelSizeY;
    varying mediump vec2 texCoord;
    uniform sampler2D texture;

    vec4 tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i )
    {
        vec4 p0q0 = texture2D(textureSampler_i, texCoord_i);
        vec4 p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0.0));

        vec4 p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0.0, texelSizeY));
        vec4 p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, texelSizeY));

        float a = fract( texCoord_i.x * textureSizeWidth );

        vec4 pInterp_q0 = mix( p0q0, p1q0, a );
        vec4 pInterp_q1 = mix( p0q1, p1q1, a );

        float b = fract( texCoord_i.y * textureSizeHeight );
        return mix( pInterp_q0, pInterp_q1, b );
    }

    void main()
    {
        gl_FragColor = tex2DBiLinear(texture, texCoord);
    }

texelSizeX and texelSizeY are simply 1.0 / (texture width) and 1.0 / (texture height), respectively.
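For concreteness, a minimal host-side sketch of how those uniforms could be computed and uploaded (the helper name is hypothetical; `gl` and `program` are assumed to be a standard WebGLRenderingContext and linked program):

```javascript
// Texel sizes are just the reciprocal texture dimensions.
function texelSizes(textureWidth, textureHeight) {
  return { x: 1.0 / textureWidth, y: 1.0 / textureHeight };
}

// Uploading them to the shader above, e.g. for the 2048x2048 source:
// const t = texelSizes(2048, 2048);
// gl.uniform1f(gl.getUniformLocation(program, 'texelSizeX'), t.x);
// gl.uniform1f(gl.getUniformLocation(program, 'texelSizeY'), t.y);
```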

I would like to implement a higher-quality filtering technique, ideally a [Lanczos][1] filter, which should produce far better results. But I cannot seem to get my head around how to implement the algorithm in GLSL, as I am very new to WebGL and GLSL in general.

Could anybody point me in the right direction?

Thanks in advance.

gordyr

1 Answer

If you're looking for Lanczos resampling, the following is the shader program I use in my open source GPUImage library:

Vertex shader:

 attribute vec4 position;
 attribute vec2 inputTextureCoordinate;

 uniform float texelWidthOffset;
 uniform float texelHeightOffset;

 varying vec2 centerTextureCoordinate;
 varying vec2 oneStepLeftTextureCoordinate;
 varying vec2 twoStepsLeftTextureCoordinate;
 varying vec2 threeStepsLeftTextureCoordinate;
 varying vec2 fourStepsLeftTextureCoordinate;
 varying vec2 oneStepRightTextureCoordinate;
 varying vec2 twoStepsRightTextureCoordinate;
 varying vec2 threeStepsRightTextureCoordinate;
 varying vec2 fourStepsRightTextureCoordinate;

 void main()
 {
     gl_Position = position;

     vec2 firstOffset = vec2(texelWidthOffset, texelHeightOffset);
     vec2 secondOffset = vec2(2.0 * texelWidthOffset, 2.0 * texelHeightOffset);
     vec2 thirdOffset = vec2(3.0 * texelWidthOffset, 3.0 * texelHeightOffset);
     vec2 fourthOffset = vec2(4.0 * texelWidthOffset, 4.0 * texelHeightOffset);

     centerTextureCoordinate = inputTextureCoordinate;
     oneStepLeftTextureCoordinate = inputTextureCoordinate - firstOffset;
     twoStepsLeftTextureCoordinate = inputTextureCoordinate - secondOffset;
     threeStepsLeftTextureCoordinate = inputTextureCoordinate - thirdOffset;
     fourStepsLeftTextureCoordinate = inputTextureCoordinate - fourthOffset;
     oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;
     twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;
     threeStepsRightTextureCoordinate = inputTextureCoordinate + thirdOffset;
     fourStepsRightTextureCoordinate = inputTextureCoordinate + fourthOffset;
 }

Fragment shader:

 precision highp float;

 uniform sampler2D inputImageTexture;

 varying vec2 centerTextureCoordinate;
 varying vec2 oneStepLeftTextureCoordinate;
 varying vec2 twoStepsLeftTextureCoordinate;
 varying vec2 threeStepsLeftTextureCoordinate;
 varying vec2 fourStepsLeftTextureCoordinate;
 varying vec2 oneStepRightTextureCoordinate;
 varying vec2 twoStepsRightTextureCoordinate;
 varying vec2 threeStepsRightTextureCoordinate;
 varying vec2 fourStepsRightTextureCoordinate;

 // sinc(x) * sinc(x/a) = (a * sin(pi * x) * sin(pi * x / a)) / (pi^2 * x^2)
 // Assuming a Lanczos constant of 2.0, and scaling values to max out at x = +/- 1.5

 void main()
 {
     lowp vec4 fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate) * 0.38026;

     fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate) * 0.27667;
     fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate) * 0.27667;

     fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate) * 0.08074;
     fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate) * 0.08074;

     fragmentColor += texture2D(inputImageTexture, threeStepsLeftTextureCoordinate) * -0.02612;
     fragmentColor += texture2D(inputImageTexture, threeStepsRightTextureCoordinate) * -0.02612;

     fragmentColor += texture2D(inputImageTexture, fourStepsLeftTextureCoordinate) * -0.02143;
     fragmentColor += texture2D(inputImageTexture, fourStepsRightTextureCoordinate) * -0.02143;

     gl_FragColor = fragmentColor;
 }

This is applied in two passes, with the first performing a horizontal downsampling and the second a vertical downsampling. The texelWidthOffset and texelHeightOffset uniforms are alternately set to 0.0 and the width fraction or height fraction of a single pixel in the image.
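A sketch of the per-pass uniform values that paragraph describes, assuming the 1.0 / (image dimension in pixels) convention Brad states in the comments (the helper name is hypothetical):

```javascript
// Two-pass Lanczos setup: pass 1 filters horizontally, pass 2 vertically.
// The offset for the unused axis is set to 0.0 each time.
function passOffsets(imageWidth, imageHeight) {
  return [
    { texelWidthOffset: 1.0 / imageWidth, texelHeightOffset: 0.0 },  // horizontal pass
    { texelWidthOffset: 0.0, texelHeightOffset: 1.0 / imageHeight }, // vertical pass
  ];
}
```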

I hard-calculate the texel offsets in the vertex shader because this avoids dependent texture reads on the mobile devices I'm targeting with this, leading to significantly better performance there. It is a little verbose, though.
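As a sanity check, the hard-coded weights in the fragment shader are close to (though not exactly) a normalized Lanczos-2 window sampled at nine points scaled to +/-1.5, as the shader comment describes. A quick sketch:

```javascript
// Lanczos-2 kernel: sinc(x) * sinc(x/2) = 2*sin(pi*x)*sin(pi*x/2) / (pi*x)^2.
function lanczos2(x) {
  if (x === 0) return 1;
  if (Math.abs(x) >= 2) return 0;
  const px = Math.PI * x;
  return (2 * Math.sin(px) * Math.sin(px / 2)) / (px * px);
}

// Nine taps at offsets -4..4, scaled so the outermost sample sits at x = +/-1.5,
// then normalized so the weights sum to 1 (preserving overall brightness).
const raw = [];
for (let i = -4; i <= 4; i++) raw.push(lanczos2(i * 1.5 / 4));
const sum = raw.reduce((a, b) => a + b, 0);
const weights = raw.map(w => w / sum);
// weights[4] (the center tap) comes out near 0.37, close to the 0.38026 above.
```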

Results from this Lanczos resampling:

[image: Lanczos result]

Normal bilinear downsampling:

[image: bilinear result]

Nearest-neighbor downsampling:

[image: nearest-neighbor result]

Brad Larson
  • A beautifully constructed answer. Thank you. I should be able to get there from the code you posted. Cheers! – gordyr Jan 16 '13 at 21:32
  • Just to let you know, I now have this working perfectly and the results are beautiful. Strangely, I had to set the texel offsets to (1.0 / (destinationwidth*3)) and (1.0 / (destinationheight*3)) for best results. I'm not sure I understand why, but using the standard width/height produced a very blurry image. Regardless, it's fabulous now. Huge thanks! – gordyr Jan 16 '13 at 22:55
  • @gordyr - Good to hear. You mean that you needed to use texelWidthOffset = 3.0 / (image width in pixels) or texelWidthOffset = 1.0 / (3.0 * (image width in pixels))? I generated the above images with texelWidthOffset = 1.0 / (image width in pixels) and texelHeightOffset = 1.0 / (image height in pixels), but if a factor of three works for you, go with it. – Brad Larson Jan 16 '13 at 23:02
  • I'm actually loading the full-size image in as the texture... then performing the processing with the texel offset set to 1.0 / (3.0 * (resized image width))... then finally actually resizing the image down. The results are spectacular. I have played with other texel offsets based on the actual image size, and the results are always dependent on the source image resolution. This way the quality is smooth and sharp whatever the source. Obviously this is on an HTML5 canvas, so this method might not translate to other devices. But for my purposes it is perfect. – gordyr Jan 16 '13 at 23:12
  • @BradLarson, two questions about your code. 1) What happens at the edge positions of the image? Convolvers for the CPU that precalculate filters apply appropriate weight compensations. 2) You replace the sinc function with a staircase (only one value per window sub-interval). That does not seem to be the same as sinc. Is direct sinc calculation on the GPU slow? – Vitaly Dec 17 '14 at 05:12
  • 1
    @Vitaly - Currently, edges are clamped (values are repeated as you sample beyond the edge). I notice little reduction in image quality as a result, but this is different from dynamic changes in weighting. It's a tradeoff for performance. I'm precalculating the sinc() values for the weights here, again as an approximation to keep performance reasonable. Trig functions are extremely slow to calculate on the GPU compared to simple arithmetic. – Brad Larson Dec 19 '14 at 02:23
  • @BradLarson, one more question - is it OK to use pre-built filters stored in a second texture (instead of the sinc approximation)? Or will this solution also have problems/limitations with GPU + WebGL? – Vitaly Feb 07 '15 at 00:17
  • What is unclear to me, and what I'm looking for help with, is that this shader blurs the image. But then how do I make it smaller? – AndreaBogazzi Oct 28 '17 at 19:07
  • To restrict my drawing area, should I use gl.viewport, the vertex shader with a scale matrix on gl_Position, or a smaller destination texture? – AndreaBogazzi Oct 28 '17 at 19:15
  • I spent the weekend on WebGL Lanczos resizing. This filter with 4 taps cannot resize by more than a certain amount. I had to build (and I'm happy with the result) dynamic shader generation to obtain a working 3-lobe Lanczos. Taps go up to 120 sometimes; I believe performance is not the best, but it is still very fast in the browser. – AndreaBogazzi Oct 30 '17 at 11:08