The differences between your lines of code are not just about conversions; some of them do entirely different things and produce different values.
1. `float float1 = integer1 / (5 * integer2);`

   `5 * integer2` gives an `int`, an `int` divided by an `int` gives an `int`, and you assign that `int` value to a `float` variable through an implicit conversion, because `int` has a smaller range than `float`. So with `float float1 = 1 / (5 * 2)` you get a `System.Single` 0 as the result.
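   A minimal, runnable sketch (using the sample values `integer1 = 1` and `integer2 = 2` from above) that shows the integer division happening before the assignment:

   ```csharp
   using System;

   int integer1 = 1;
   int integer2 = 2;

   // 5 * integer2 is int math (10), and 1 / 10 is int math (0);
   // only the final int 0 is implicitly widened to float.
   float float1 = integer1 / (5 * integer2);

   Console.WriteLine(float1);            // 0
   Console.WriteLine(float1.GetType());  // System.Single
   ```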
2. `var float2 = integer1 / (5.0 * integer2);`

   `5.0` is essentially `5.0d`, so the type of `float2` is `System.Double`. With `var float2 = 1 / (5.0 * 2)` you get a `System.Double` 0.1 as the result.
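   The same sketch for this line (same assumed values); the `double` literal in the divisor drives the whole expression to `double`:

   ```csharp
   using System;

   int integer1 = 1;
   int integer2 = 2;

   // 5.0 is a double literal, so 5.0 * integer2 is double math (10.0)
   // and the division is double division.
   var float2 = integer1 / (5.0 * integer2);

   Console.WriteLine(float2);            // 0.1
   Console.WriteLine(float2.GetType());  // System.Double
   ```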
3. `var float3 = integer1 / (5F * integer2);`

   Using the same values as above, you get a `System.Single` 0.1, which is probably what you want.
4. `var float4 = (float)integer1 / (5 * integer2);`

   You get the same result as in item 3. The difference is that item 3 divides an `int` by a `float`, while item 4 divides a `float` by an `int`; the sketch below compares the two.
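   A small comparison of items 3 and 4 (same assumed values), showing that both produce the same `System.Single` 0.1 even though the conversion happens on different sides of the division:

   ```csharp
   using System;

   int integer1 = 1;
   int integer2 = 2;

   // Item 3: int numerator, float divisor (5F * integer2 is float math).
   var float3 = integer1 / (5F * integer2);

   // Item 4: float numerator (the cast binds before /), int divisor.
   var float4 = (float)integer1 / (5 * integer2);

   Console.WriteLine(float3);            // 0.1
   Console.WriteLine(float4);            // 0.1
   Console.WriteLine(float3.GetType());  // System.Single
   ```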
5. `var float5 = integer1 / (float)(5 * integer2);`
6. `var float6 = integer1 / ((float)5 * integer2);`
7. `var float7 = integer1 / (5 * (float)integer2);`

   These three are almost the same as item 3: each divides an `int` by a `float`, just with different ways of building the divisor.
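   A sketch of all three (same assumed values). `float5` casts the `int` product 10 to `float`, while `float6` and `float7` make one factor a `float` so the multiplication itself is `float` math; either way the divisor ends up a `float`:

   ```csharp
   using System;

   int integer1 = 1;
   int integer2 = 2;

   var float5 = integer1 / (float)(5 * integer2);  // cast the int product
   var float6 = integer1 / ((float)5 * integer2);  // cast the constant
   var float7 = integer1 / (5 * (float)integer2);  // cast the variable

   Console.WriteLine($"{float5} {float6} {float7}");  // 0.1 0.1 0.1
   ```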
8. `var float8 = Convert.ToDecimal(integer1 / (5 * integer2));`
9. `var float9 = integer1 / Convert.ToDecimal(5 * integer2);`

   These two give you `System.Decimal` values, which have higher precision. Item 8 has the same issue as item 1: you get 0, because the argument passed to `Convert.ToDecimal` is an `int` 0.
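   A sketch contrasting the two (same assumed values); converting the divisor first is what preserves the fraction:

   ```csharp
   using System;

   int integer1 = 1;
   int integer2 = 2;

   // Item 8: the int division 1 / 10 already produced 0
   // before Convert.ToDecimal ever ran.
   var float8 = Convert.ToDecimal(integer1 / (5 * integer2));

   // Item 9: the divisor becomes decimal first, so the
   // division itself is decimal math.
   var float9 = integer1 / Convert.ToDecimal(5 * integer2);

   Console.WriteLine(float8);            // 0
   Console.WriteLine(float9);            // 0.1
   Console.WriteLine(float9.GetType());  // System.Decimal
   ```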