In C#, a numeric literal defaults to either int or double:
double var1 = 56.1;
int var2 = 51;
These are the default types the literals are given. However, the game engine I'm working on uses floats for position, rotation, etc. When a float is assigned a double literal, i.e.
float varFloat = 75.4;
the compiler throws an error saying the double literal cannot be implicitly converted to a float, which is correct. So the double literal has to be turned into a float literal, i.e.
float varFloat = 75.4f;
However, an int literal is implicitly converted to a float, i.e.
float varFloat = 44; // This is fine.
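
For completeness, here are the three cases in one compilable snippet (the class and variable names are just placeholders):

class LiteralExamples
{
    void Examples()
    {
        // float bad = 75.4;   // compile error: double literal can't be implicitly converted to float
        float ok1 = 75.4f;     // fine: the 'f' suffix makes it a float literal
        float ok2 = 44;        // fine: the int literal is implicitly converted to float
    }
}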
My question is: is the compiler smart enough to realize that 44 should be a float literal? If not, that means a conversion is also performed every time the literal is accessed. In most cases this really doesn't matter, but in high-performance code it could become an issue (even if a minor one) if int literals are used all over the place instead of float literals. As far as I know, there is no way to change these literals into floats other than going through the source code line by line, which really isn't time well spent.
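
One way I could imagine checking this (just a sketch; it assumes you can inspect the compiled IL with something like ildasm or sharplab.io) is to compare two tiny methods:

class LiteralIlCheck
{
    // If the compiler does the conversion at compile time, both methods should
    // compile to the same instruction: ldc.r4 44 (load a float constant directly).
    // If it doesn't, the first one would instead load an int and convert it at
    // runtime (ldc.i4.s 44 followed by conv.r4).
    static float FromIntLiteral() => 44;
    static float FromFloatSuffix() => 44f;
}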
So, does the compiler convert the int literal into a float literal at compile time? If not, what can be done about this waste of processing power other than trying to avoid it?