Recently I had a debate with a colleague about the image gradient operation.
Normally, the image gradient is defined as:
dI_dx(j,k) = I(j,k+1) - I(j,k) # x partial derivative of image
dI_dy(j,k) = I(j+1,k) - I(j,k) # y partial derivative of image
For the x partial derivative, this operation can be represented by a 1x2 filter array (in convolution convention):
[1 -1]
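As a quick sanity check, here is a minimal numpy sketch of the forward difference on a single scanline (the sample values `0, 1, 4, 9, ...` are my own example, not from the original discussion). Note that `np.convolve` flips the kernel, so `[1, -1]` in convolution convention reproduces I(k+1) - I(k) directly:

```python
import numpy as np

# Forward difference along x: dI_dx(k) = I(k+1) - I(k),
# expressed as convolution with the 1x2 kernel [1, -1].
row = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # example scanline: I(k) = k^2
fwd = np.convolve(row, [1, -1], mode='valid')
print(fwd)   # [1. 3. 5. 7.]
```

In 'valid' mode the output has one fewer sample than the input, and each output value sits between two input pixels rather than on one, which is part of what the later discussion is about.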
But there is also another definition:
dI_dx(j,k) = I(j,k+1) - I(j,k-1)
which corresponds to the 1x3 filter array:
[1 0 -1]
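The central difference can be sketched the same way. One property worth noting (my own illustration, using the same hypothetical `I(k) = k^2` scanline): the central difference, divided by 2, recovers the true derivative of a quadratic signal exactly at the center pixel, whereas the forward difference estimates the derivative halfway between pixels:

```python
import numpy as np

# Central difference along x: dI_dx(k) = I(k+1) - I(k-1),
# expressed as convolution with the 1x3 kernel [1, 0, -1].
row = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # example scanline: I(k) = k^2, dI/dk = 2k
central = np.convolve(row, [1, 0, -1], mode='valid')
print(central / 2)   # [2. 4. 6.] -> exact derivative 2k at k = 1, 2, 3
```

The output is also centered on a pixel, so the gradient image aligns with the input image, unlike the 1x2 filter whose result lives on the half-pixel grid.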
So my colleague asked: what is the difference between them, and why is the latter 1x3 filter used more often than the 1x2 filter?
We have discussed some possible reasons:
1x3 sampling is more robust than 1x2
My colleague: No, they both sample 2 pixels for each gradient pixel, so the probability that noise occurs on a sampled pixel is the same for both filters.
1x3 is smoother than 1x2
My colleague: No, the 1x2 and 1x3 filters as defined include no smoothing at all. The Sobel filter is the one smoothed by a Gaussian...
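For reference, the smoothing my colleague mentions is visible in the structure of the Sobel kernel itself: it factors into the 1x3 central-difference row above and a binomial smoothing column [1 2 1] (which approximates a Gaussian). A minimal sketch:

```python
import numpy as np

# The Sobel x-kernel is separable: a smoothing column times a derivative row.
smooth = np.array([1, 2, 1])    # binomial smoothing, approximates a Gaussian
deriv = np.array([1, 0, -1])    # central-difference derivative
sobel_x = np.outer(smooth, deriv)
print(sobel_x)
# [[ 1  0 -1]
#  [ 2  0 -2]
#  [ 1  0 -1]]
```

So the plain 1x3 filter is the derivative part alone; Sobel adds smoothing perpendicular to the derivative direction.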
Extended question: does the image gradient's spatial filter kernel have a so-called "window size"?
By the way, my colleague and I are not persuaded by the following reference webpage:
http://www.cis.rit.edu/people/faculty/rhody/EdgeDetection.htm