I am writing some code to do some math for a research project. I am not sure what level of precision I am going to need, or how much of a difference rounding errors could make in my results.
For example, one thing I want to do is calculate the surface area of parts of 3D models by adding up the areas of all the triangles. On a simple model with a few hundred triangles this might work fine, but on a more complicated model with tens of thousands of triangles the rounding errors could start to accumulate.
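To make that concrete, here is roughly what the summation looks like. This is just a minimal sketch; the Triangle struct and the use of System.Numerics.Vector3 are placeholders for however the mesh actually ends up being stored:

```csharp
using System.Numerics;

struct Triangle
{
    public Vector3 A, B, C;
}

static class SurfaceArea
{
    // Area of one triangle = half the magnitude of the cross product of two edges.
    static float TriangleArea(Triangle t)
    {
        Vector3 ab = t.B - t.A;
        Vector3 ac = t.C - t.A;
        return 0.5f * Vector3.Cross(ab, ac).Length();
    }

    // Naive running sum over the whole mesh -- this is where I worry the
    // rounding error from tens of thousands of additions piles up.
    public static float TotalArea(Triangle[] mesh)
    {
        float total = 0f;
        foreach (var t in mesh)
            total += TriangleArea(t);
        return total;
    }
}
```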
Can anyone recommend a strategy to get rid of the rounding errors, or at least a method to track the size of the rounding error? For example, could I use the machine epsilon value to figure out the amount of error that may have been introduced at each stage of a calculation, and keep a running total of the possible accumulated error?
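Something like this is the running bound I have in mind. It is only a sketch: the machine epsilon constant and the SumWithBound name are mine, and I'm assuming the simple model where each addition can be off by at most one unit roundoff relative to its result:

```csharp
using System;

static class ErrorTracking
{
    // Unit roundoff for double: 2^-53. (Note: double.Epsilon in .NET is the
    // smallest subnormal value, which is not the same thing.)
    const double MachineEps = 1.1102230246251565e-16;

    // Sum the values while accumulating a worst-case bound on the rounding
    // error: each addition is assumed to be off by at most |sum| * MachineEps.
    public static (double Sum, double ErrorBound) SumWithBound(double[] values)
    {
        double sum = 0.0;
        double bound = 0.0;
        foreach (double v in values)
        {
            sum += v;
            bound += Math.Abs(sum) * MachineEps; // worst-case error of this addition
        }
        return (sum, bound);
    }
}
```

Is this kind of running bound actually meaningful, or does it overestimate the error so badly that it's useless in practice?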
I would also like to test my code with the different numeric types. I know a float gives me about 7 significant digits, a double 15-16, and a decimal 28-29. Is there a way to write my methods once and have the numeric type inferred, instead of writing a separate version for each type?
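To show the duplication I mean, this is the kind of thing I end up writing now (Heron's formula is just a stand-in calculation; the class and method names are only for this example):

```csharp
using System;

static class AreaOverloads
{
    // Heron's formula, double version.
    public static double TriangleArea(double a, double b, double c)
    {
        double s = (a + b + c) / 2.0;
        return Math.Sqrt(s * (s - a) * (s - b) * (s - c));
    }

    // The same formula again, float version.
    public static float TriangleArea(float a, float b, float c)
    {
        float s = (a + b + c) / 2.0f;
        return MathF.Sqrt(s * (s - a) * (s - b) * (s - c));
    }

    // A decimal version would need yet another copy, plus its own square
    // root routine, because Math.Sqrt only works on doubles.
}
```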