
See EDIT

I tried to implement the Gaussian blur algorithm on my own in MATLAB, instead of using the built-in function, in order to understand it in detail.

I found an interesting implementation, and someone has already asked how to code this kind of algorithm, so that is not the issue here.

Furthermore, I use the following formula to calculate the standard deviation for a given radius, like GIMP does:

stdDeviation = sqrt(-(radius^2) / (2*log10(1 / 255)));
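
(For reference, for radius = 21 this formula gives a standard deviation of roughly 9.57:)

    radius = 21;
    stdDeviation = sqrt(-(radius^2) / (2*log10(1 / 255)))   % about 9.57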

My algorithm works for small radii (e.g. 3, 5, 7) without any problems (at least no difference is visible). If I try to blur an image with a radius of 21, the output is the following:

My result

Compared to the output of GIMP / MATLAB's imgaussfilt(A,sigma):

MATLAB's / GIMP's result

Obviously the algorithms do not compute the same (or even a similar) output image. What do GIMP and MATLAB's imgaussfilt(A,sigma) do differently?

The image's border can be neglected; I'm aware of that problem. But I do not understand the origin of the 'strange pixel stripes' in my output image.
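
(As a side note on the border: conv2's 'same' shape option would at least keep the output the same size as the input. A rough sketch, unrelated to the stripes, using the variables from the code below; the double cast just keeps the convolution in floating point:)

    redBlurred = conv2(double(redChannel), h, 'same');  % central part, same size as the input channel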

For reasons of completeness, here is the source code:

function y = gaussianBlurSepTest(inputImage)
% radius in pixel; RADIUS MUST BE ODD! 
radius = 21;
% approximate value for standard deviation
stdDeviation = sqrt(-(radius^2) / (2*log10(1 / 255)));

% build the Gaussian kernel (size radius x radius)
ind = -floor(radius/2):floor(radius/2);
[X, Y] = meshgrid(ind, ind);
h = exp(-(X.^2 + Y.^2) / (2*stdDeviation*stdDeviation));
h = h / sum(h(:));   % normalize so the kernel sums to 1

redChannel = inputImage(:,:,1);
greenChannel = inputImage(:,:,2);
blueChannel = inputImage(:,:,3);

% filter each channel separately (full 2-D convolution)
redBlurred = conv2(redChannel, h);
greenBlurred = conv2(greenChannel, h);
blueBlurred = conv2(blueChannel, h);

y = cat(3, uint8(redBlurred), uint8(greenBlurred), uint8(blueBlurred));
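
(For reference, a minimal way to call this might look like the sketch below; peppers.png is just an example image, and the double cast keeps the convolution in floating point.)

    img = imread('peppers.png');                 % uint8 RGB example image
    blurred = gaussianBlurSepTest(double(img));  % filter in floating point
    imshow(blurred);                             % the function returns uint8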

EDIT:

For the sake of completeness and to help others: I applied Erfan's modifications. The result is much better now, but one can still see an obvious difference from GIMP's calculation; GIMP's result looks 'smoother'.

The implemented algorithm:

function y = gaussianBlurSepTest(inputImage)
radius = 21;
stdDeviation = sqrt(-(radius^2) / (2*log10(1 / 255)));
ind = -floor(radius/2):floor(radius/2);
[X, Y] = meshgrid(ind, ind);
[~, R] = cart2pol(X, Y);  % Here R is defined
h = exp(-R.^2 / (2*stdDeviation*stdDeviation));
h = h / sum(h(:));
h(R > radius/2) = 0;  % truncate the kernel to a circular support
h = h / sum(h(:));    % re-normalize so the truncated kernel sums to 1
redChannel = inputImage(:,:,1);
greenChannel = inputImage(:,:,2);
blueChannel = inputImage(:,:,3);
redBlurred = conv2(redChannel, h);
greenBlurred = conv2(greenChannel, h);
blueBlurred = conv2(blueChannel, h);
y = cat(3, uint8(redBlurred), uint8(greenBlurred), uint8(blueBlurred));

The result: The implemented algorithm

GIMP's result: GIMP

To answer the question completely and to help others with the same problem, I think it is useful to also ask about the origin of these remaining differences.

Thank you!

2 Answers


h is responsible for the horizontal and vertical stripes that appear. Although you have defined h with angular symmetry, its square boundary breaks this symmetry, as seen in the following plot:

h

To make things right, you can truncate h at the correct radius:

truncated h

I applied the modification to your function and now it should give much better results:

function y = gaussianBlurSepTest(inputImage, radius)
stdDeviation = sqrt(-(radius^2) / (2*log10(1 / 255)));
ind = -floor(radius/2):floor(radius/2);
[X, Y] = meshgrid(ind, ind);
[~, R] = cart2pol(X, Y);  % Here R is defined
h = exp(-R.^2 / (2*stdDeviation*stdDeviation));
h = h / sum(h(:));
h(R > radius/2) = 0;  % And here h is truncated. The rest is the same.

Here is my test. My image:

[image]

After your gaussianBlurSepTest (radius = 35):

[image]

After the modified function:

[image]

Note: The output gets a bit darker. If that is an issue, you can re-normalize the kernel after the truncation or widen your meshgrid.
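
A minimal sketch of that re-normalization (this is also what the edited function in the question ends up doing): divide h by its sum once more after the truncation, so the remaining weights add up to 1 again and the overall brightness is preserved.

    h(R > radius/2) = 0;   % circular truncation as above
    h = h / sum(h(:));     % re-normalize: remaining weights sum to 1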

Erfan
  • I understand my mistake now. Thank you! That will do the job for me! I'll mark the answer as accepted. For the sake of completeness and to help others, I think it is useful to ask how the algorithm can be improved further. See my edit. – Janosh Castle Sep 07 '16 at 07:03
  • I believe that by increasing the size of your mesh (and therefore the range of `h`) you could get much better results, but at the cost of much more computation time. Try doubling the range of the meshgrid; a rough sketch follows below. – Erfan Sep 07 '16 at 16:43
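
A rough sketch of that suggestion, under the assumption that the grid is doubled and the circular cutoff is moved out to the new edge, so the tails decay naturally instead of being clipped:

    ind = -radius : radius;               % twice the original range
    [X, Y] = meshgrid(ind, ind);
    [~, R] = cart2pol(X, Y);
    h = exp(-R.^2 / (2*stdDeviation^2));
    h(R > radius) = 0;                    % circular cutoff at the wider edge
    h = h / sum(h(:));                    % kernel still sums to 1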

I think the culprit is the conv2 instruction, and it is because your image type is uint8 or uint16; when you filter an image you should use a float or double type. Your filter is so large that applying the coefficient window to the pixels can exceed the 8-bit or 16-bit integer range (255 or 65535). I suggest casting your image to double before applying your filter, and casting it back to uint8 afterwards, as below:

   Output=gaussianBlurSepTest(double(inputImage));
   Output=uint8(Output);

Good luck.

Mehdi