In general, you are not required to cast operands when performing arithmetic. C has a number of rules for automatically converting operands, and they serve well in many situations.
In many current C implementations, `float` is not more precise than `int`. `int` is commonly 32 bits, while `float` has 24 bits for the significand (the fraction portion of the floating-point number), along with about eight for the exponent and one for the sign. This gives `float` wider range but less precision. The conversion rules give a ranking of types used for the conversions, but the ranking is not strictly from less precise to more precise.
The automatic conversions do not serve all situations, and C programmers need to become familiar with the rules so they know when to add casts. These include:
1. When the result is not representable, or is not representable with the desired accuracy, in the default type.
2. When the automatic conversions would cause errors in one of the operands.
An example of case 1 is when we want to divide two integers and get a floating-point result:
float x = 1/3; // Wrong, integer division is performed, yielding zero, but we want (approximately) ⅓.
float x = (float) 1 / 3; // Right, convert at least one operand to float (or use 1.f, a float constant).
Another example is when two integers might overflow:
int x = Some large integer;
int y = Some large integer;
long z = x*y; // Wrong, result may overflow.
long z = (long) x * y; // Possibly right; converting one operand first performs the multiplication in long, which may be wide enough to represent the product.
An example of case 2 is when conversion from `int` to `float` may lose precision:
float x = 2;
int y = 123456789;
double z = x*y; // Wrong, converting 123456789 to float loses precision and produces 123456792 in many C implementations.
double z = x * (double) y; // Right, double has enough precision in many C implementations.