I'm trying to generate a 2D Gaussian kernel in MATLAB. I found two ways to do it:
1. Using mvnpdf

mu = [0 0];
sigma = 1.4;
sigma_mat = 1/sigma^2 * eye(2);
x1 = -3:1:3;
x2 = x1;
[X1,X2] = meshgrid(x1,x2);
G = mvnpdf([X1(:) X2(:)],mu,sigma_mat);
F = reshape(G,length(x2),length(x1));

This gives the below matrix

0    0    0    0    0    0    0
0    0    0    0.01 0    0    0
0    0    0.04 0.12 0.04 0    0
0    0.01 0.12 0.31 0.12 0.01 0
0    0    0.04 0.12 0.04 0    0
0    0    0    0.01 0    0    0
0    0    0    0    0    0    0
2. Using fspecial('gaussian')

f = fspecial('gaussian', [7,7], 1.4);

This gives the matrix as

0.00    0.00    0.01    0.01    0.01    0.00    0.00
0.00    0.01    0.02    0.03    0.02    0.01    0.00
0.01    0.02    0.05    0.06    0.05    0.02    0.01
0.01    0.03    0.06    0.08    0.06    0.03    0.01
0.01    0.02    0.05    0.06    0.05    0.02    0.01
0.00    0.01    0.02    0.03    0.02    0.01    0.00
0.00    0.00    0.01    0.01    0.01    0.00    0.00

What is the difference between these 2 functions? Why are they giving different outputs?
Thanks!

Edit 1: As Cris Luengo rightly pointed out, there was an error in sigma_mat. It should be

sigma_mat = sigma^2 * eye(2);

Even after that, there are some minor differences in the decimal values.


1 Answer

The SIGMA input parameter to mvnpdf should be

sigma_mat = sigma^2 * eye(2);

Even with the same sigma, the two matrices generated are not the same: fspecial ensures that sum(f(:)) == 1. Since you're cutting off the tails of the Gaussian, this normalization differs slightly from that of the PDF of a normal distribution, whose unit integral is over the whole plane. You'll notice larger differences when reducing sigma (because of increased information loss from sampling) and when reducing the size of the output matrix (because this cuts off more of the tails). For a large sigma and a large output matrix, the differences should be very small.

The reason fspecial normalizes this way is that the output is meant to be used as a convolution kernel. When applying a smoothing filter, the filter weights should sum to 1 to avoid changing the average image intensity. If you intend to use the generated kernel for image processing, use fspecial or normalize the output of mvnpdf.
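As a minimal sketch of that normalization (assuming the corrected covariance matrix from the edit; fspecial requires the Image Processing Toolbox):

```matlab
% Sample the 2D normal PDF on the same 7x7 grid as the question
mu = [0 0];
sigma = 1.4;
sigma_mat = sigma^2 * eye(2);          % covariance matrix, not its inverse
[X1, X2] = meshgrid(-3:3, -3:3);
G = mvnpdf([X1(:) X2(:)], mu, sigma_mat);
F = reshape(G, 7, 7);

% The sampled PDF sums to slightly less than 1, because the tails
% outside the 7x7 window are cut off
sum(F(:))

% Renormalizing so the weights sum to 1 should make it match fspecial
F_norm = F / sum(F(:));
f = fspecial('gaussian', [7 7], sigma);
max(abs(F_norm(:) - f(:)))             % difference should be tiny
```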

– Cris Luengo