I am using the following code to cast from float to int. My float always has at most one decimal place. First I multiply it by 10 and then cast it to int:
float temp1 = float.Parse(textBox.Text);
int temp = (int)(temp1*10);
For 25.3 I get 252 after the cast, but for 25.2 and 25.4 I get the correct outputs of 252 and 254 respectively.
However, performing the same operation in a slightly different way gives the correct output:
float temp1 = float.Parse(textBox.Text);
temp1 = temp1*10;
int temp = (int)temp1;
Now for 25.3 I get 253. What is the reason for this, since logically the first method should also be correct? I am using Visual Studio 2010.
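For reference, here is a minimal, self-contained sketch of both approaches (the class name and the hard-coded 25.3f literal are just for illustration, standing in for the text box value); the commented results are the ones I described above, and they may differ on other compiler/runtime versions:

using System;

class FloatCastDemo
{
    static void Main()
    {
        // Hard-coded stand-in for float.Parse(textBox.Text).
        float temp1 = 25.3f;

        // First approach: cast the product directly.
        int direct = (int)(temp1 * 10);

        // Second approach: store the product back into a float, then cast.
        float temp2 = temp1 * 10;
        int viaFloat = (int)temp2;

        // Widening to double shows the value actually stored in the float
        // and its product, both slightly below the "expected" numbers.
        Console.WriteLine(((double)temp1).ToString("R"));      // about 25.2999992...
        Console.WriteLine(((double)temp1 * 10).ToString("R")); // about 252.9999924...

        Console.WriteLine(direct);   // 252 in my case (Visual Studio 2010)
        Console.WriteLine(viaFloat); // 253 in my case
    }
}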