When the compiler sees a numeric literal, it selects a type based upon the size of the number, its punctuation, and its suffix (if any), and then translates the sequence of characters into that type; all of this is done without regard for what the compiler is going to do with the number. Once this is done, the compiler will only allow the number to be used as its own type, explicitly cast to another type, or, in the two cases described below, implicitly converted to another type.
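
For illustration, here is a quick sketch in C# syntax (an assumption on my part; in C#, `Single` is spelled `float` and the decimal suffix is `m`) showing how the literal's own form, rather than the target, determines its type:

```csharp
var a = 1;            // int: no decimal point, no suffix, fits in an int
var b = 10000000000;  // long: too large for an int (or uint), so the first integer type that can hold it is chosen
var c = 1.0;          // double: decimal point, no suffix
var d = 1.0f;         // float (System.Single): 'f' suffix
var e = 1.0m;         // decimal: 'm' suffix in C#
```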
If the number is interpreted as any integer type (`int`, `long`, etc.), the compiler will allow it to be used to initialize any integer type in which the number is representable, as well as any binary or decimal floating-point type, without regard for whether the number can be represented precisely in that type.
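
For example (C# syntax, as above):

```csharp
byte small = 200;             // allowed: 200 is representable in a byte
// byte tooBig = 300;         // rejected: 300 is not representable in a byte
float f = 16777217;           // allowed, but stored as 16777216f -- 16777217 has no exact float representation
double d = 9007199254740993;  // allowed, but stored as 9007199254740992 -- 2^53 + 1 has no exact double representation
decimal m = 123;              // integer literals may also initialize a decimal
```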
If the number is of type `Single` [denoted by an `f` suffix], the compiler will allow it to be used to initialize a `Double`, without regard for whether the resulting `Double` will accurately represent the literal with which the `Single` was initialized.
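
For example (the printed digits are approximate and depend on the runtime's formatting):

```csharp
double viaFloat = 0.1f;   // allowed: the float nearest to 0.1 is widened to double
double direct = 0.1;      // the double nearest to 0.1

Console.WriteLine(viaFloat);           // roughly 0.10000000149011612
Console.WriteLine(viaFloat == direct); // False
```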
Numeric literals of type `Double` [including a decimal point, but with no suffix] or `Decimal` [a `D` suffix not followed immediately by a plus or minus] cannot be used to initialize a variable of any other type, even if the number would be representable precisely in the target type, or the result would be the target type's best representation of the numeric literal in question.
Note that conversions between type `Decimal` and the other floating-point types (`double` and `float`) should be avoided whenever possible, since the conversion methods are not very accurate. While there are many `double` values for which no exact `Decimal` representation exists, there is a wide numeric range in which `Decimal` values are more tightly packed than `double` values. One might expect that converting a `double` would choose the closest `Decimal` value, or at least one of the `Decimal` values which lies between that number and the next higher or lower `double` value, but the normal conversion methods do not always do so. In some cases the result may be off by a significant margin.
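
To make the size of that error concrete, here is a sketch in C# (the exact digits shown are what I would expect from the built-in conversion, which keeps only about 15 significant digits; treat them as illustrative):

```csharp
double d = 123456789012345.67;   // stored as roughly 123456789012345.671875;
                                 // adjacent doubles here are only about 0.0156 apart

decimal cast = (decimal)d;
Console.WriteLine(cast);         // 123456789012346 -- about 0.33 away from d, many double-widths off

decimal viaString = decimal.Parse(d.ToString("R"));
Console.WriteLine(viaString);    // 123456789012345.67 -- much closer to the double's actual value
```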
If you ever find yourself having to convert `Double` to `Decimal`, you're probably doing something wrong. While there are some operations which are available on `Double` but not on `Decimal`, the act of converting between the two types means whatever `Decimal` result you end up with is apt to be less precise than if all computations had been done in `Double`.