The problem I have is to create a big integer library. I want to make it both cross-platform and as fast as possible, which means I should do the math with the largest data type the system natively supports.

I don't actually want to know whether I am compiling for a 32-bit or 64-bit system; all I need is a way to declare an integer of whatever width is largest natively, whether that turns out to be 64 bits, 32 bits, or something else. I will be using sizeof to behave differently depending on what that is.
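To illustrate what I mean, the rest of the code would look something like this; the typedef at the top is the one line I don't know how to fill in, and the name word is just a placeholder:

```c
#include <stdio.h>

/* Placeholder: this is the one line I want to get right. */
typedef unsigned long word;

int main(void) {
    /* Everything downstream keys off sizeof, not off a named width. */
    if (sizeof(word) >= 8)
        puts("doing math in 64-bit (or wider) limbs");
    else
        puts("doing math in 32-bit limbs");
    return 0;
}
```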
Here are some possible solutions and their problems (each is sketched in code after the list):
Use sizeof(void*): This gives the size of a pointer to memory. It is possible (though unlikely) that a system has memory pointers wider than the integers it can natively do math on, or vice versa.
Always use long: While it is true that on several platforms long is either 4 bytes or 8 bytes depending on the architecture (my system is one such example), some compilers make long 4 bytes even on 64-bit systems (64-bit Windows is the usual example).
Always use long long: On many 32-bit systems this is a 64-bit integer, which may not be as efficient (though probably more efficient than whatever code I would write to emulate it). The real problem is that it may not be supported at all on some architectures (such as the one powering my mp3 player).
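To make the three options concrete, here is roughly how each one could be spelled as a typedef. This is only a sketch: UINTPTR_MAX and ULLONG_MAX are C99 features, so I would not expect it to work everywhere (certainly not with my mp3 player's compiler):

```c
#include <stdint.h>   /* UINTPTR_MAX, defined only if the target has uintptr_t */
#include <limits.h>   /* ULLONG_MAX, defined only under C99 and later */

/* Option 1: infer the word size from the pointer size.
   If UINTPTR_MAX is undefined, the #if falls through to 32 bits. */
#if defined(UINTPTR_MAX) && UINTPTR_MAX > 0xFFFFFFFFu
typedef uint64_t word_from_ptr;
#else
typedef uint32_t word_from_ptr;
#endif

/* Option 2: always long, and hope it tracks the native width. */
typedef unsigned long word_from_long;

/* Option 3: long long, but only where the compiler actually provides it. */
#if defined(ULLONG_MAX)
typedef unsigned long long word_from_llong;
#endif
```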
To emphasize: my code does not care what the actual size of the integer is once it has been chosen (it relies on sizeof() anywhere the size matters). I just want it to pick whichever integer type will make my code most efficient.
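For example, a limb-wise addition routine looks the same no matter which type gets picked; nothing below mentions a concrete width, and word is again a placeholder for whichever type is chosen:

```c
#include <stddef.h>
#include <stdio.h>

typedef unsigned long word;  /* placeholder for whichever type is chosen */

/* Add two n-limb numbers. The carry logic relies only on unsigned
   wraparound, so it works for any limb width. Returns the final carry. */
static word add_n(word *r, const word *a, const word *b, size_t n) {
    word carry = 0;
    for (size_t i = 0; i < n; i++) {
        word s = a[i] + carry;   /* may wrap to 0 when carry == 1 */
        carry = (s < carry);     /* detect that wrap */
        r[i] = s + b[i];
        carry += (r[i] < s);     /* detect the second wrap; both wraps
                                    cannot happen at once, so carry <= 1 */
    }
    return carry;
}

int main(void) {
    /* (max + 1) in the low limb: expect r = {0, 1} with no final carry. */
    word a[2] = { (word)-1, 0 }, b[2] = { 1, 0 }, r[2];
    word c = add_n(r, a, b, 2);
    printf("low=%lu high=%lu carry=%lu\n",
           (unsigned long)r[0], (unsigned long)r[1], (unsigned long)c);
    return 0;
}
```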