
I am writing some code to do some math for a research project. I am not sure what level of precision I will need, or how much of a difference rounding errors could introduce into my results.

For example, one thing I want to do is calculate the surface area of parts of 3D models by adding up the areas of all the triangles. On a simple model with a few hundred triangles this might work fine, but on a more complicated model with tens of thousands of triangles, rounding errors could start to accumulate.
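For concreteness, here is a minimal sketch of the kind of computation I mean (assuming a hypothetical flat vertex array with three consecutive vertices per triangle):

using System.Numerics;

// Naive accumulation: each += rounds once, so the error can grow with the
// number of triangles. Assumes three consecutive vertices per triangle.
public static float TotalArea(Vector3[] vertices)
{
    float total = 0f;
    for (int i = 0; i + 2 < vertices.Length; i += 3)
    {
        Vector3 ab = vertices[i + 1] - vertices[i];
        Vector3 ac = vertices[i + 2] - vertices[i];
        total += 0.5f * Vector3.Cross(ab, ac).Length(); // area of one triangle
    }
    return total;
}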

Can anyone recommend a strategy to get rid of the rounding errors, or at least a method to track their size? For example, can I use the machine epsilon value to figure out the amount of error that may have been introduced at each stage of a calculation, and keep a running total of the possible accumulated error?
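One standard technique that addresses exactly this kind of accumulation is compensated (Kahan) summation, which carries a correction term so the error stays roughly constant instead of growing with the number of terms. A minimal sketch, assuming the per-triangle areas have already been computed:

using System.Collections.Generic;

public static double KahanSum(IEnumerable<double> values)
{
    double sum = 0.0;
    double compensation = 0.0; // recovers low-order bits lost in each addition
    foreach (double value in values)
    {
        double corrected = value - compensation;
        double tentative = sum + corrected;           // low-order bits of corrected are lost here
        compensation = (tentative - sum) - corrected; // measure exactly what was lost
        sum = tentative;
    }
    return sum;
}

On the tracking side, the classical first-order bound for naively summing n terms is roughly n * eps * sum(|x_i|), where eps is the machine epsilon (about 2.2e-16 for double). Note that C#'s double.Epsilon is the smallest positive subnormal value, not the machine epsilon.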

I would also like to test some code using the different types. I know a float will give me 7 digits, a double 15-16 digits, and a decimal 28-29 digits. Is there a way to write my methods once and have the type inferred, instead of writing a separate version for each numeric type?

user802599
  • `decimal` is too slow for large amounts of geometry. Use `double` for performance and higher tolerance of rounding errors. – Koby Duck Jan 25 '18 at 05:10

1 Answer


If I understand correctly, you can use generics.

public T Calculate<T>(T input) where T : struct
{
    // Perform calculations here and return the result.
    return input;
}
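One caveat: with only a struct constraint, the compiler will not let you use arithmetic operators on T. On .NET 7 or later you can constrain to the generic math interfaces instead; a minimal sketch, assuming System.Numerics.INumber<T>:

using System.Numerics;

// float, double, and decimal all implement INumber<T> in .NET 7+,
// so arithmetic operators become available on T inside the method.
public static T Calculate<T>(T input) where T : INumber<T>
{
    return input * input; // placeholder calculation
}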

Or you could use overloads.

public float Calculate(float input)
{
    // Perform calculations here and return a float result.
    return input;
}

public double Calculate(double input)
{
    // Perform calculations here and return a double result.
    return input;
}

public decimal Calculate(decimal input)
{
    // Perform calculations here and return a decimal result.
    return input;
}

I can't tell which would work based on the details provided.
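Either way, a usage sketch for comparing the types side by side (assuming the hypothetical Calculate overloads above are in scope):

float f = Calculate(0.1f);    // float overload
double d = Calculate(0.1);    // double overload
decimal m = Calculate(0.1m);  // decimal overload
Console.WriteLine($"{f} / {d} / {m}"); // see how the result drifts with precision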

Jonathan Wood