
I am converting a string from hex to decimal. The problem is that with the Visual Studio compiler the conversion returns a wrong value. However, when I compile the same code on a Mac in the terminal using the g++ compiler, the value is returned correctly.

Why is this happening?

#include <cstdlib>   // strtoul
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string hex = "412ce69800";

    unsigned long n = strtoul( hex.c_str(), nullptr, 16 ); 

    cout<<"The value to convert is: "<<hex<<" hex\n\n";
    cout<<"The converted value is: "<<n<<" dec\n\n";
    cout<<"The converted value should be: "<<"279926183936 dec\n\n";

    return 0;
}

output:

(screenshot of the program output, showing an incorrect converted value on Windows)

  • Note that VS is an IDE, **not** a compiler. It uses MS's `cl.exe` compiler internally, and you can also run `cl` from the command line just like gcc – phuclv Jun 20 '14 at 06:20

1 Answer


Because on Windows long is a 32-bit type, unlike most Unix/Linux implementations, which use the LP64 data model in which long is 64 bits. The number 412ce69800 needs 39 bits, so it simply cannot be stored in a 32-bit type. Read the compiler warnings and you'll spot the issue immediately.
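
You can verify the type sizes with a quick check like this (just a sketch, not from the original post):

#include <climits>
#include <iostream>

int main()
{
    // 32 on Windows (LLP64), 64 on typical 64-bit Unix/Linux/macOS (LP64)
    std::cout << "long: " << sizeof(long) * CHAR_BIT << " bits\n";

    // at least 64 bits on every platform
    std::cout << "long long: " << sizeof(long long) * CHAR_BIT << " bits\n";
}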

The C standard only requires long to have at least 32 bits. C99 added a new long long type with at least 64 bits, and that's guaranteed on all platforms. So if your value fits in a 64-bit type, use unsigned long long or uint64_t/uint_least64_t and strtoull instead to get the correct value.
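
A minimal corrected version of the program from the question (a sketch, assuming the same input string) could look like this:

#include <cstdlib>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string hex = "412ce69800";

    // strtoull returns unsigned long long, which is at least 64 bits everywhere
    unsigned long long n = strtoull( hex.c_str(), nullptr, 16 );

    cout << "The value to convert is: " << hex << " hex\n";
    cout << "The converted value is: " << n << " dec\n";   // 279926183936 on both Windows and the Mac

    return 0;
}

This compiles with MSVC as well as g++ and prints the expected value on both.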

  • I think it's compiler-dependent, not OS-dependent. In addition, I'm not sure that your solution will work, since `strtoul` returns `unsigned long` in any case. – barak manos Jun 20 '14 at 06:22
  • @barakmanos I edited it to use strtoull several minutes ago, and all the compilers I know of conform to the OS's type sizes – phuclv Jun 20 '14 at 06:25
  • I've had the chance to work on embedded SW which was running "standalone" (i.e., no OS, no scheduling, no virtual memory, processes, threads, etc.). So I can't see how the size of types there was determined by anything other than the compiler. If anything, the compiler would conform to the CPU architecture. – barak manos Jun 20 '14 at 06:32
  • @barakmanos of course the types are defined by the compiler. Maybe there are some weird compilers that have an 18-bit int or a 16-bit char on x86 and still fully conform to the C standard, but I've never come across one like that. All the ones I know have the same type sizes as the OS they target (if any); if there's no OS they can use whatever sizes they want – phuclv Jun 20 '14 at 06:38
  • I admit I haven't fully investigated this, but intuitively, I would guess that **the OS itself**, which is designed to run on a specific processor, is built using the appropriate tool-chain (i.e., compiler and linker suitable for the specific processor at hand). So the correct way to look at it would be that type size is determined by the compiler. Again, that's just what my intuition tells me. – barak manos Jun 20 '14 at 11:09
  • @barakmanos I don't mean your statement is wrong. Inherently the sizes depend on the compiler. But most compilers targeting a specific OS use the same type sizes as the other compilers on that platform. For example, although gcc has its roots in Unix, when targeting Windows it also uses a 32-bit long like any other Windows compiler for compatibility, and when targeting 16- or 8-bit microcontrollers it uses a 16-bit int. That's the OP's problem, because the different sizes cause the number to overflow on Windows – phuclv Jun 20 '14 at 13:46