Here is an image showing the output of the same program, first on Intel and then on ARM:
http://screencast.com/t/1eA64D4rF
Both show the output of reading a binary file whose first column contains double-precision floating-point numbers. Why am I unable to obtain the correct result on the ARM environment? On Intel I get the expected values (41784.998495, 41784.998623), but on ARM I get garbage such as -8.1974E+204.
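For reference, the values are read roughly like this (a minimal sketch of the approach; the file name "data.bin" and the record layout are placeholders, not the actual SC_SCID.cpp code):

#include <cstdio>

int main() {
    // Open the binary data file; "data.bin" stands in for the real file name.
    std::FILE* fp = std::fopen("data.bin", "rb");
    if (!fp) {
        std::perror("fopen");
        return 1;
    }

    double value;
    // Read each 8-byte double straight from the file and print it.
    while (std::fread(&value, sizeof(value), 1, fp) == 1) {
        std::printf("%f\n", value);
    }

    std::fclose(fp);
    return 0;
}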
The ARM processor I am using is:
Processor : ARM926EJ-Sid(wb) rev 0 (v5l)
BogoMIPS : 331.77
Features : swp half thumb fastmult edsp java
CPU implementer : 0x41
CPU architecture: 5TEJ
CPU variant : 0x0
CPU part : 0x926
CPU revision : 0
Cache type : write-back
Cache clean : cp15 c7 ops
Cache lockdown : format C
Cache format : Harvard
I size : 32768
I assoc : 1
I line length : 32
I sets : 1024
D size : 32768
D assoc : 1
D line length : 32
D sets : 1024
Hardware : MV-88fxx81
Revision : 0000
Serial : 0000000000000000
The compile command I use on the ARM is: g++ -Wall SC_SCID.cpp
How can I read double-precision values correctly on this processor? Are there any compiler options that I need to enable to correctly handle double-precision numbers on the ARM?