Full disclaimer: this is my first attempt at edge detection, so when responding, please assume I know about software but not necessarily about computer vision or related fields.
I am reading about various edge detection methods. A natural first step in many of them is to smooth the image before proceeding. This seems reasonable because you don't want random noise to interfere with any higher-level logic.
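For concreteness, this is roughly the pipeline I have in mind (a minimal sketch using OpenCV in Python; the file name, kernel size, sigma, and Canny thresholds are placeholder values I chose, not anything from a specific tutorial):

```python
import cv2

# Load the image in grayscale (path is just a placeholder).
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Smooth with a Gaussian blur to suppress pixel-level noise.
# The 5x5 kernel and sigma=1.4 are arbitrary, untuned choices.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# Run Canny edge detection on the smoothed image.
# The 50/150 hysteresis thresholds are placeholders.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edges.png", edges)
```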
My question is: what useful information (if any) can be lost due to a Gaussian blur?