I have read in different posts on Stack Overflow and in the C# documentation that converting a `long` (or any other data type representing a number) to `double` loses precision. This is quite obvious given how floating-point numbers are represented.

My question is: how large is the loss of precision when I convert a large number to `double`? Do I have to expect differences greater than +/- X?
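For illustration, here is a small snippet showing the effect I mean (the value 2^53 + 1 is just an arbitrary example above the 53-bit mantissa limit of `double`, not my real data):

```csharp
// Arbitrary example: 2^53 + 1 is the first integer a double cannot represent exactly.
long original = 9007199254740993L;

double converted = original;          // implicit long -> double conversion
long roundTripped = (long)converted;  // cast back to compare the values

Console.WriteLine(original);      // 9007199254740993
Console.WriteLine(roundTripped);  // 9007199254740992 -> off by 1
```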
The reason I would like to know this is that I have to deal with a continuous counter, which is a `long`. This value is read by my application as a `string`, needs to be converted, has to be divided by e.g. 10 or some other small number, and is then processed further.

Would `decimal` be more appropriate for this task?
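To make the scenario concrete, this is roughly what I am doing (the counter value and the divisor of 10 are placeholders, not my real data):

```csharp
string raw = "922337203685477580";   // the counter arrives as a string
long counter = long.Parse(raw);

// Option A: divide via double -- the exact result would be 92233720368547758
double viaDouble = counter / 10.0;
Console.WriteLine((long)viaDouble);  // 92233720368547760 -> off by 2

// Option B: divide via decimal -- keeps all digits of the long exactly
decimal viaDecimal = counter / 10m;
Console.WriteLine(viaDecimal);       // 92233720368547758
```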