I currently have some code that normalizes a vector of doubles (divides each element by the sum). When debugging, I sometimes see that the elements in the vector are all 0.0. If I then take the sum of the elements, I get either 0.0 or 4.322644347104e-314#DEN (which I recently found out is a denormalized number). I would like to skip normalizing the vector when the sum is either 0.0 or a denormalized number. The only way I could think of to handle these two cases is to check whether the sum is less than some small 'epsilon', but I'm not sure how small to make epsilon.
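For illustration, here is roughly the guard I had in mind (just a sketch: the function name `normalize` and the choice of `std::numeric_limits<double>::min()` as the epsilon are my own guesses, not settled code from the project):

```cpp
#include <cmath>
#include <limits>
#include <numeric>
#include <vector>

void normalize(std::vector<double>& v)
{
    const double sum = std::accumulate(v.begin(), v.end(), 0.0);

    // Skip normalization when the sum is zero or denormalized.
    // std::numeric_limits<double>::min() is the smallest *normal* double,
    // so anything smaller in magnitude is either 0.0 or a denormal --
    // but I'm not sure this is the right threshold to use.
    if (std::fabs(sum) < std::numeric_limits<double>::min())
        return;

    for (double& x : v)
        x /= sum;
}
```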
I have two questions:
- What is the best way to take these cases into account?
- Is the value of the denormalized number machine-dependent?