
I am trying to implement a real time blur on a SurfaceView or GlSurfaceView getting a camera feed and came across this: Grafika https://github.com/google/grafika

It has a feature that lets you apply a real time blur filter but the blur is not strong enough as demonstrated here: https://www.youtube.com/watch?v=kH9kCP2T5Gg

The Grafika class uses a 3x3 filter kernel, which to my understanding is an array of floating point values that, when adjusted, applies a desired effect to the View.

Here is the blur code:

case CameraCaptureActivity.FILTER_BLUR:
    programType = Texture2dProgram.ProgramType.TEXTURE_EXT_FILT;
    kernel = new float[] {
        1f/16f, 2f/16f, 1f/16f,
        2f/16f, 4f/16f, 2f/16f,
        1f/16f, 2f/16f, 1f/16f };
    break;

Does anyone have any idea how to adjust those numbers to strengthen the blur?

Sharpen uses completely different coefficients:

kernel = new float[] {
    0f, -1f, 0f,
    -1f, 5f, -1f,
    0f, -1f, 0f };

and I can't quite figure out the pattern. Any help would be appreciated.

1 Answer

The basic idea is explained on the Wikipedia image processing kernel page. The blur example there is the same as the one in Grafika (and shown in your question).

The idea is that the output pixel at (x,y) is a weighted sum of that pixel and the pixels around it, i.e.

[x-1,y-1]  [x,y-1]  [x+1,y-1]
[x-1,y]    [x,y]    [x+1,y]
[x-1,y+1]  [x,y+1]  [x+1,y+1]
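
As a plain-Java sketch of that weighted sum (Grafika actually does this per-texel in a fragment shader, so the method and names below are purely illustrative), computing one channel of one output pixel looks roughly like this:

// Illustrative CPU version of a 3x3 convolution for a single channel.
// In Grafika the equivalent math runs in the fragment shader for every texel.
// Boundary handling is omitted for brevity.
static float convolve3x3(float[][] image, int x, int y, float[] kernel) {
    float sum = 0f;
    for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
            // The kernel is laid out row by row, same as the Grafika arrays.
            float weight = kernel[(ky + 1) * 3 + (kx + 1)];
            sum += weight * image[y + ky][x + kx];
        }
    }
    return sum;
}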

To get the original image, we'd use weights of zero for everything but the current pixel:

0 0 0
0 1 0
0 0 0

We could get a simple blur by averaging all 9 pixels. If we just set a weight of 1 for every pixel like this:

1 1 1
1 1 1
1 1 1

then the value would overflow by a factor of 9 because we just added 9 pixels together -- some dark pixels would be very bright, but any moderately-bright pixels would just wash out. So we'd need to normalize the value by dividing by 9.
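
In the array form Grafika uses, that normalized 9-pixel average would look like the sketch below (and it actually blurs slightly more than the weighted kernel in your question, since the center pixel gets no extra emphasis):

kernel = new float[] {
    1f/9f, 1f/9f, 1f/9f,
    1f/9f, 1f/9f, 1f/9f,
    1f/9f, 1f/9f, 1f/9f };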

In the blur example shown in your question, we're using different weights for each pixel:

1 2 1
2 4 2
1 2 1

The total weight is 1+2+1+2+4+2+1+2+1 = 16, so we need to divide the result by 16. Division is relatively expensive -- remember we're doing this millions of times on a large image -- but we can avoid it entirely by just dividing the coefficients by 16.
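
If you want to experiment with other weights, the safe pattern is to pick whatever weights you like and then divide every entry by their sum, so the kernel still sums to 1 and overall brightness is preserved. A small helper like this (hypothetical, not part of Grafika) would do it:

// Normalizes a kernel in place so its entries sum to 1, preserving brightness.
// Kernels that already sum to 0 (e.g. edge detection) are left alone.
static void normalize(float[] kernel) {
    float sum = 0f;
    for (float w : kernel) {
        sum += w;
    }
    if (sum != 0f) {
        for (int i = 0; i < kernel.length; i++) {
            kernel[i] /= sum;
        }
    }
}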

For the sharpen example shown in your question, we give the current pixel a weight of 5, and subtract pixel values from four nearby pixels, again leaving a final weight of 1. If all of the pixels are roughly the same value, we'll compute 5 * pixel - 4 * pixel == pixel. If the nearby values are different -- indicating we're at some sort of edge -- then the new pixel value will tend to be very bright or very dark.
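
For a concrete example: if the center pixel is 100 and its four weighted neighbors are all 90, the sharpened value is 5*100 - (90+90+90+90) = 140, noticeably brighter than before; in a flat region where every pixel is 100 you get 5*100 - 4*100 = 100 and nothing changes.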

You can accomplish quite a bit with a 3x3 filter kernel, but in some cases you may need to use a larger one (see e.g. the 7x7 on the wikipedia gaussian blur page). This will increase your processing time, but fortunately GPUs are pretty good at doing lots of math in parallel.
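
For a noticeably stronger blur you could build a larger Gaussian kernel yourself. As far as I can tell, Grafika's Texture2dProgram shader is written for a 9-element (3x3) kernel, so using a bigger one would mean extending that shader; treat the helper below as a hypothetical sketch of generating the weights, not something Grafika accepts as-is:

// Hypothetical helper: builds a normalized (2*radius+1) x (2*radius+1)
// Gaussian kernel, row by row. radius=2 with sigma around 1.0 gives a 5x5
// kernel similar to the example on the Wikipedia Gaussian blur page.
static float[] gaussianKernel(int radius, double sigma) {
    int size = 2 * radius + 1;
    float[] kernel = new float[size * size];
    float sum = 0f;
    for (int y = -radius; y <= radius; y++) {
        for (int x = -radius; x <= radius; x++) {
            float w = (float) Math.exp(-(x * x + y * y) / (2.0 * sigma * sigma));
            kernel[(y + radius) * size + (x + radius)] = w;
            sum += w;
        }
    }
    for (int i = 0; i < kernel.length; i++) {
        kernel[i] /= sum;  // normalize so the weights sum to 1
    }
    return kernel;
}

Another general option is to apply a small blur repeatedly (render the blurred output and blur it again); repeated small blurs compound into a wider one, at the cost of extra rendering passes.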

fadden
  • I understand the solution now, but the matrix still does not give any blurring. Just like Steven, I tried playing with the numbers, but increasing them without also increasing the divisor just increases the brightness, which is expected. Still not sure what to do, please help, and thanks so much for such nice work by the way. – Mohammad Elsayed Dec 23 '22 at 23:00
  • Can you take a look at my question if you have time? https://stackoverflow.com/questions/74969773/android-opengl-es-draw-an-image-on-texture-by-sending-all-vertices-of-the-image – Mohammad Elsayed Dec 31 '22 at 20:29