float x = 19.2F;
I understand that without F
the variable is assumed to be a double
by default. But isn't that a bit redundant since we're declaring the variable to be a float
in the first place?
Can someone explain why this is the case?
It can indeed be seen as redundant, but it is there for your safety. Simply put, there is no implicit conversion from double
to float
, because the conversion can lose information (precision). Since that loss can happen by mistake, the language designers decided to make you explicitly acknowledge it with the F suffix (or an explicit cast), so the compiler knows you are aware of the narrowing.
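To make the narrowing concrete, here is a small sketch (written as Java; the C# behavior is analogous) showing that a bare 19.2 literal is a double, that you need the suffix or a cast to store it in a float, and that the stored float no longer compares equal to the double value:

```java
public class FloatLiteralDemo {
    public static void main(String[] args) {
        // float bad = 19.2;    // does NOT compile: 19.2 is a double literal
        float f = 19.2F;        // OK: float literal via the F suffix
        float g = (float) 19.2; // OK: explicit narrowing cast

        // Precision was lost in the narrowing: the float value,
        // widened back to double, is no longer exactly 19.2.
        System.out.println(f == 19.2); // prints false
        System.out.println(f == g);    // prints true: both took the same rounding
    }
}
```

The `false` on the first comparison is the information loss the first answer describes: 19.2 has no exact binary representation, and rounding it to float's 24-bit significand produces a different value than rounding it to double's 53 bits.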
Konrad is right.
If you see this as duplication, you can use e.g. var
:
var myFloat = 23f;
Tada, the "duplicated" float
is gone :))
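Note that with var the suffix now does all the work: the inferred type comes entirely from the literal. A quick sketch (assuming Java 10+ for var; class names are just for the demo):

```java
public class VarInferenceDemo {
    public static void main(String[] args) {
        var myFloat = 23f;   // inferred as float, because of the f suffix
        var myDouble = 23.0; // no suffix: inferred as double

        // Boxing each value reveals the inferred type at runtime.
        System.out.println(((Object) myFloat).getClass().getSimpleName());  // Float
        System.out.println(((Object) myDouble).getClass().getSimpleName()); // Double
    }
}
```

So dropping the f here would silently change the variable's type to double, which is exactly the ambiguity the explicit suffix guards against.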