Assuming 32-bit values (int32_t, float), they are stored in memory as follows:
// 255
int:   00000000 00000000 00000000 11111111 (big endian)
int:   11111111 00000000 00000000 00000000 (little endian)
float: 0 10000110 11111110000000000000000 (sign | exponent | mantissa)
By this point it's fairly obvious that the bit pattern in memory differs depending on the type it is interpreted as.
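To illustrate what I mean, here is a minimal sketch of my own (assuming a little-endian host such as x86_64; the dump_bytes helper is just for demonstration) that prints the raw bytes of 255 stored as an int32_t and as a float:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Print the raw bytes of an object, lowest address first.
static void dump_bytes(const void* p, std::size_t n) {
    const unsigned char* bytes = static_cast<const unsigned char*>(p);
    for (std::size_t i = 0; i < n; ++i)
        std::printf("%02x ", bytes[i]);
    std::printf("\n");
}

int main() {
    int32_t i = 255;
    float   f = 255.0f;

    std::printf("int32_t 255: "); dump_bytes(&i, sizeof i); // ff 00 00 00 on little-endian
    std::printf("float   255: "); dump_bytes(&f, sizeof f); // 00 00 7f 43 (0x437f0000 == 255.0f)
}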
Further assuming a standard C-style cast, how is this achieved? I usually work with x86(_64) and ARMHF CPUs, but I'm not familiar with their respective assembly languages or with how the CPUs are organised internally, so please excuse me if this could be answered simply by knowing the internals of these CPUs. Primarily of interest is how C/C++ and C# handle this cast.
- Does the compiler generate instructions which interpret the sign bit and the exponent portion and just convert them into a memory representation of an integer, or is there some magic going on in the background?
- Do x86_64 and ARMHF have built-in instructions to handle this sort of thing?
- Or does a C-style cast simply copy the memory, and it's up to the runtime to interpret whatever value pops out (seems unlikely, but I may be mistaken)? See the sketch after this list for the two operations I have in mind.
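To make the last point concrete, here is a sketch of my own (not taken from any reference) contrasting a value-converting cast with a bit-for-bit reinterpretation done via memcpy; the second is what I mean by "copy the memory":

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 255.0f;

    // Value conversion: the C-style cast yields the same numeric value,
    // but the bit pattern changes (0x437f0000 -> 0x000000ff).
    int32_t converted = (int32_t)f;

    // Bit-for-bit reinterpretation: memcpy keeps the bit pattern and only
    // changes the type the bits are read as.
    int32_t reinterpreted;
    std::memcpy(&reinterpreted, &f, sizeof reinterpreted);

    std::printf("converted:     %d (0x%08x)\n", converted, (uint32_t)converted);
    std::printf("reinterpreted: %d (0x%08x)\n", reinterpreted, (uint32_t)reinterpreted);
}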
The suggested posts Why are floating point numbers inaccurate? and Why can't decimal numbers be represented exactly in binary? do help with understanding basic concepts of floating-point math, but do not answer this question.