My intuition says that a high dynamic range (HDR) image should provide more stable features and edges for image segmentation and other low-level vision algorithms to work with. But it could also go the other way: the larger number of bits might lead to sparser features, and there is the extra cost of producing the HDR image in the first place if it has to be derived via exposure fusion or similar rather than coming straight from the hardware.
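To make concrete the kind of comparison I have in mind, here is a minimal sketch using OpenCV: it fuses a bracketed exposure stack with Mertens exposure fusion and then counts ORB keypoints and Canny edge pixels on a single mid exposure versus the fused result. The filenames are placeholders, and the detectors and thresholds are arbitrary choices just to illustrate a "feature richness" comparison, not a proper evaluation protocol.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same scene (filenames are placeholders).
paths = ["exposure_low.jpg", "exposure_mid.jpg", "exposure_high.jpg"]
exposures = [cv2.imread(p) for p in paths]

# Exposure fusion (Mertens): merges the bracket without needing exposure times.
fused = cv2.createMergeMertens().process(exposures)        # float32, roughly in [0, 1]
fused_8u = np.clip(fused * 255, 0, 255).astype(np.uint8)   # back to 8-bit for the detectors

def feature_stats(img_bgr, label):
    """Crude 'feature richness' measure: ORB keypoint count and Canny edge pixel count."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create(nfeatures=5000).detect(gray, None)
    edges = cv2.Canny(gray, 50, 150)
    print(f"{label}: {len(keypoints)} ORB keypoints, {int((edges > 0).sum())} edge pixels")

feature_stats(exposures[1], "single (mid) exposure")  # the standard-dynamic-range baseline
feature_stats(fused_8u, "exposure-fused image")
```

A proper study would of course go beyond raw counts and look at things like repeatability of the features across viewpoint or illumination changes, which is exactly the sort of published comparison I am hoping exists.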
Can anyone point me to research on this topic? Ideally I would like to find a comparison study of various machine vision techniques on standard versus high dynamic range images.