
If I have a simple C++ program like Hello World and I compile it on a 32-bit machine and on a 64-bit machine, I get two different binaries that do the same thing but contain different machine code, and only the 32-bit binary will be able to run on both 32-bit and 64-bit machines.

In both cases I don't get any benefit, because the source code is the same and they do the same thing. This makes me think that all the software packages of some Linux distro written for 32 bit could be ported to a 64-bit machine without changing anything. Then, what do I get? Any benefits?

Is there any example of code in C/C++ that I can write for 64-bit that I can't write for 32-bit?

For example, Google Chrome right now is unsupported on 32-bit, but not on 64-bit. What could be the reason?

rvillablanca
  • @JamesRoot that accesses out of bounds in 64-bit – M.M Mar 17 '16 at 05:10
  • Every day I deal with the pain of not being able to port a large, mature application to 64-bit for a number of reasons. Unlike your "Hello world" program, this program contains roughly half a million lines of code, written over a period of 10 years. You are incorrect in your assumption that code can simply be "ported to 64-bit" without consequence. – paddy Mar 17 '16 at 05:13
  • [64-bit Performance Advantages](http://stackoverflow.com/q/3343812/995714), [What are the advantages of a 64-bit processor?](http://stackoverflow.com/q/607322/995714) – phuclv Mar 17 '16 at 05:33
  • The reason Chrome is only supported on one of them is so Google doesn't have to bother making sure they both work. – user253751 Mar 17 '16 at 06:11
  • `Is there any example of code in C/C++ that I can do some in 64-bit that I can't do in 32-bit?` No. All computer architectures are Turing complete and can do anything other Turing-complete systems can do, given enough time and memory – phuclv Mar 17 '16 at 09:58

3 Answers


There are too many differences (memory handling, CPU architecture, bus, etc.) between a 32-bit and 64-bit CPU to get into here, but the biggest and most obvious difference is addressable memory (i.e. how far your pointer can go).

Take the following code for example:

#include <iostream>

int main(int argc, char* argv[])
{
    // this is just to demonstrate 32 vs. 64:
    // on a 32-bit build both constants truncate to the same 0xFFFFFFFF,
    // on a 64-bit build they stay distinct
    int* x = (int*)0xFFFFFFFFFFFFFFFF;
    int* y = (int*)0x00000000FFFFFFFF;
    std::cout << std::hex <<
        "x = " << x << std::endl <<
        "y = " << y << std::endl;
    if (y == x) {
        std::cout << "RIGHT!" << std::endl;
    } else {
        std::cout << "WRONG!" << std::endl;
    }
    return 0;
}

Q: What do you think will be printed on a 32-bit machine vs. a 64-bit machine?

A: A very different result!

As you can see from the above code, if I expect x to equal y and test this on a 32-bit machine, then things will go as I expect and my code will run fine and everyone's happy! But then I pass this code to a friend who has to recompile for their 64-bit machine and they are most certainly not happy since all they see is WRONG!

I won't go deep into the other differences between 32 and 64 bit (like device and system drivers, or kernel modules) since that's beyond the scope of this forum, but hopefully the above code illustrates why building for a 32-bit machine and then recompiling for a 64-bit machine isn't as cut and dried as one might initially think.

So to answer some of your questions more directly:

Then, what do I get? Any benefits?

It depends on what you're trying to do. If you have a program that will never reach the limits of a 32-bit CPU, then you won't necessarily see any benefit from building for a 64-bit CPU, and depending on the CPU and OS you might actually see a degradation in performance (as was the case in the early days of 32-bit emulation on 64-bit CPUs). With modern cores and OSes, though, this is largely a non-issue for the "average" program (save the fact that you can't access more than 4GB of RAM).

However, if you have a project that would consume massive amounts of memory (like a web browser), or needs to do calculations on very large sets of numbers (like 3D calculations), then you will most certainly see a benefit in the fact that a 64-bit build can address more than 4GB of RAM and work with larger numbers natively.
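As a minimal sketch of why the 4GB ceiling matters in practice (the 5 GiB figure is just an arbitrary illustration): on a 32-bit build `size_t` can't even express the request, while a 64-bit build with enough RAM can satisfy it.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <limits>
#include <new>
#include <vector>

int main()
{
    std::cout << "pointer size: " << sizeof(void*) << " bytes\n";  // 4 on 32-bit, 8 on 64-bit

    const std::uint64_t request = 5ULL * 1024 * 1024 * 1024;       // ~5 GiB, just an example figure
    if (request > std::numeric_limits<std::size_t>::max()) {
        // On a 32-bit build size_t is 32 bits wide, so the request can't even be expressed.
        std::cout << "a 5 GiB object cannot even be expressed on this build\n";
        return 0;
    }
    try {
        std::vector<char> big(static_cast<std::size_t>(request));  // plausible on 64-bit with enough RAM
        std::cout << "allocated " << big.size() << " bytes\n";
    } catch (const std::bad_alloc&) {
        std::cout << "allocation failed\n";
    }
    return 0;
}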

It just depends on the scope of your project and what architectures you're willing to support.

For example, Google Chrome right now is unsupported on 32-bit, but not on 64-bit. What could be the reason?

Only the Chrome team can tell you specifically, but my guess is that it comes down to a couple of reasons.

First is the fact that 32-bit CPUs are largely dying out, and dropping support for a dying architecture means they can focus on improving the 64-bit builds.

The second reason probably has to do with memory; the 64-bit version of Chrome can access more than 4GB of RAM (assuming the system has more than that) and thus a 64-bit machine with 8GB of RAM would be able to handle more browser sessions and potentially be more responsive (to the individual sessions) than on a 32-bit machine.

Additionally, Wikipedia has a pretty good page that details more of the 32-bit to 64-bit transition and the various considerations, should you be interested in diving deeper into the differences.

Hope that can help.

txtechhelp

64-bit calculations can be faster than 32-bit ones on x64 platforms. A 64-bit program can also use more RAM (it is not limited to 4 GB).
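A rough sketch of the arithmetic point (assuming an x86/x86-64 target): the same 64-bit multiply compiles to a single instruction for x86-64 but to a sequence of 32-bit operations for a 32-bit target. Comparing the output of `g++ -S -m64` and `g++ -S -m32` on this file shows the difference.

#include <cstdint>
#include <iostream>

// A 64-bit multiply: one instruction on x86-64, but several
// 32-bit multiplies and adds when compiled for a 32-bit target.
std::uint64_t mul64(std::uint64_t a, std::uint64_t b)
{
    return a * b;
}

int main()
{
    std::cout << mul64(0x100000001ULL, 3) << '\n';  // prints 12884901891
    return 0;
}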

Andrei R.

In most 'significant' programs, the memory used for data far exceeds the memory used for the code. A 32-bit 'Hello World' benefits from only needing 32-bit pointers, marginally better code density, etc. But in reality, data sets (and we're talking games nowadays) need access beyond the 4GB limit.

You probably wouldn't even buy a new desktop-class graphics card with 4GB today. If you're not happy with integrated graphics, you probably wouldn't get a GPU with less than 8GB on board.

There's an effort to provide Linux kernel and userland support for an x32 ABI, which takes advantage of the x86-64 ISA but essentially uses 32-bit pointers; the theoretical 4GB data limit is more than enough for many programs. But the speed advantages due to code density (caching) are not convincing, and don't justify the effort of supporting yet another ABI (and parallel libraries / loaders) alongside the x86-64 and IA32 ABIs, not to mention the code maintenance. The cost-benefit ratio just doesn't add up.
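As a small illustration (assuming a GCC toolchain with the relevant multilibs installed; the file name is arbitrary), the same source can be built for all three ABIs and the type sizes compared:

#include <iostream>

// Build the same file three ways (multilib support permitting):
//   g++ -m64  sizes.cpp   ->  pointer: 8, long: 8   (x86-64 / LP64)
//   g++ -m32  sizes.cpp   ->  pointer: 4, long: 4   (IA32)
//   g++ -mx32 sizes.cpp   ->  pointer: 4, long: 4, but full 64-bit registers (x32)
int main()
{
    std::cout << "pointer: " << sizeof(void*)
              << ", long: " << sizeof(long)
              << ", long long: " << sizeof(long long) << '\n';
    return 0;
}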

It should be noted that x86 instructions have variable-length encodings, which was considered arcane when early RISC architectures looked set to rule the world, but has actually worked in x86's favour (denser code).


A more successful implementation of this idea was the N32 ABI for the MIPS (RISC) architecture, particularly on late-90s SGI hardware. PowerPC64 can also use 64-bit instructions in a 32-bit mode, but PPC was designed from its inception to be extensible to 64 bits, IIRC, even though initial implementations only supported a 32-bit ISA.


This makes me think that all the software packages of some Linux distro written for 32 bit could be ported to a 64-bit machine without changing anything.

Experience actually revealed the opposite. People had been making assumptions about integral type sizes, pointer arithmetic, etc., for ages, and this caused a lot of headaches. By which I mean bugs. There's now more emphasis on portable types (C99's intN_t family) and an awareness of ABI issues when it comes to things like long int, for example.
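A small sketch of the kind of assumption that bites, and the portable spelling that avoids it (details vary by data model, e.g. LP64 on 64-bit Linux vs. LLP64 on 64-bit Windows; uintptr_t is available on typical platforms):

#include <cstdint>
#include <cstdio>

int main()
{
    int value = 42;

    // Classic 32-bit-era assumption: "a pointer fits in an int / a long".
    // Fine on ILP32, silently truncates on LP64 (int) or LLP64 (long) targets.
    // int bad = (int)&value;   // would lose the upper 32 bits on 64-bit

    // Portable spelling: an integer type guaranteed to hold a pointer.
    std::uintptr_t p = reinterpret_cast<std::uintptr_t>(&value);

    // Fixed-width types instead of guessing what 'long' happens to be.
    std::int64_t big = INT64_C(1) << 40;

    std::printf("%p %lld\n", reinterpret_cast<void*>(p), static_cast<long long>(big));
    return 0;
}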

Brett Hale