
I use Borland C++ Builder 2009 to build 32-bit executables.

I need to change a huge portion of my code, since at least one member variable of almost all my objects must become a 64-bit variable, and it is used heavily in calculations etc.

While doing this I often find that for particular functions the 64-bit value is not needed and never will be (e.g. because of buffer size limitations, or because only a sub-range is used that can never exceed a DWORD boundary), and so I wonder whether I should change these routines as well or not.

Or, in functions that do take the 64-bit variable as input, whether to change other function-scope variables to 64 bit as well, or leave them as they are.

So I was wondering: does a 32-bit application actually 'suffer' from the use of 64-bit variables, or not? Is this significant or completely irrelevant? If the former, I would try to keep the DWORD values where possible.

Peter
    This question appears to be off-topic because it is about requirements, architecture, and refactoring and is better suited for http://programmers.stackexchange.com/ – Captain Obvlious Oct 09 '14 at 14:41

2 Answers


Manipulating values that are larger than the CPU's registers normally needs more CPU cycles, so yes, this may have a significant impact on performance. But to know whether that is relevant in your case you have to profile anyway.
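The extra cost is easy to see if you write out what the compiler has to do. Here is a minimal sketch (hypothetical helper, standard C++; Builder 2009 would spell the types `unsigned __int64` and `unsigned int` rather than use `<cstdint>`) of a 64-bit add performed with 32-bit operations, which is roughly the ADD/ADC instruction pair a 32-bit x86 compiler emits:

```cpp
#include <cstdint>

// Hypothetical illustration: on a 32-bit CPU a 64-bit add takes two
// 32-bit adds plus carry handling, roughly ADD + ADC on x86.
std::uint64_t add64_via_32(std::uint64_t a, std::uint64_t b) {
    std::uint32_t alo = static_cast<std::uint32_t>(a);
    std::uint32_t ahi = static_cast<std::uint32_t>(a >> 32);
    std::uint32_t blo = static_cast<std::uint32_t>(b);
    std::uint32_t bhi = static_cast<std::uint32_t>(b >> 32);

    std::uint32_t lo    = alo + blo;               // first 32-bit add
    std::uint32_t carry = (lo < alo) ? 1u : 0u;    // overflow -> carry bit
    std::uint32_t hi    = ahi + bhi + carry;       // second add consumes carry

    return (static_cast<std::uint64_t>(hi) << 32) | lo;
}
```

So a single 64-bit addition costs at least two dependent 32-bit operations; multiplication and division expand even more.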

TNA
  • an impact at most, as long as he isn't doing heavy integer-arithmetic and just doubled his integer-size everywhere. Aside from the hyperbole, good. – Deduplicator Oct 09 '14 at 14:33
  • I realize it may not have a noticeable impact on the performance of the software (user experience) but it's good to understand the implications of my change and hence it's good to understand that there is indeed a cost. – Peter Oct 09 '14 at 18:16

"A lot of calculations" is precisely what? A few hundred calculations spread over a few seconds, or heavy, huge matrix operations in a never-ending loop? 64-bit additions and subtractions will roughly double computation time compared to the analogous 32-bit operations. Multiplication is relatively a little more costly, and division more costly still.
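To put numbers on this for your own machine, here is a rough microbenchmark sketch (a hypothetical harness, not from the question; `clock()` resolution is coarse and results vary by compiler and optimizer, so treat the timings as indicative only):

```cpp
#include <cstdio>
#include <ctime>

// 32-bit accumulation loop: the add fits in one register operation.
unsigned long sum32(unsigned long n) {
    unsigned long s = 0;
    for (unsigned long i = 0; i < n; ++i)
        s += i;
    return s;
}

// 64-bit accumulation loop: each add needs a register pair plus carry.
unsigned long long sum64(unsigned long long n) {
    unsigned long long s = 0;
    for (unsigned long long i = 0; i < n; ++i)
        s += i;
    return s;
}

// Call from main(), e.g. print_timings(100000000UL).
void print_timings(unsigned long n) {
    std::clock_t t0 = std::clock();
    volatile unsigned long r32 = sum32(n);          // volatile: keep the work
    std::clock_t t1 = std::clock();
    volatile unsigned long long r64 = sum64(n);
    std::clock_t t2 = std::clock();
    std::printf("32-bit: %.3fs  64-bit: %.3fs\n",
                (double)(t1 - t0) / CLOCKS_PER_SEC,
                (double)(t2 - t1) / CLOCKS_PER_SEC);
    (void)r32; (void)r64;
}
```

Whatever the ratio turns out to be on your hardware, it only matters if these operations dominate your profile.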

Assignment and arithmetic between differently-sized or differently-signed integers is generally supported transparently, even though it might cause loss of precision or (possibly unexpected) sign extension. The same transparency that eases your work can also bring you some obscure problems.
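Two short examples of those obscure problems (hypothetical helpers; the behavior shown is what you get on a typical two's-complement platform where `int` is 32 bits and `long long` is 64 bits):

```cpp
// Narrowing compiles silently but keeps only the low 32 bits.
int narrow_to_int(long long v) {
    return (int)v;
}

// Converting a signed 32-bit value to an unsigned 64-bit type:
// negative values wrap around to huge positive values.
unsigned long long widen_to_u64(int v) {
    return v;
}
```

For instance, `narrow_to_int(0x100000001LL)` quietly yields `1`, and `widen_to_u64(-1)` yields `0xFFFFFFFFFFFFFFFF`.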

I suppose you need a 64-bit variable because you represent values well above 2G (or 4G). If so, you should never attempt to assign them to 32-bit (or shorter) variables. If you suspect it could happen, you should either change all variables or properties involved, or use assertions to prevent accidental precision loss.
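One way to do that assertion is a minimal checked-narrowing helper (the name is invented here; it aborts in a debug build if the value does not fit):

```cpp
#include <cassert>
#include <limits>

// Hypothetical helper: narrow a 64-bit value to 32 bits, asserting in
// debug builds that no bits are lost.
unsigned int checked_narrow(unsigned long long v) {
    assert(v <= std::numeric_limits<unsigned int>::max()
           && "64->32 narrowing lost bits");
    return (unsigned int)v;
}
```

Usage would look like `unsigned int len = checked_narrow(total);` wherever a 64-bit quantity must feed a routine that genuinely only handles 32 bits.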

Shorter-to-larger assignments should always be OK, provided you don't mix signed and unsigned variables.
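A sketch of both widening cases (hypothetical helpers, typical platform with 32-bit `int`): signed-to-signed widening sign-extends and preserves negative values, while going through an unsigned type zero-extends and does not:

```cpp
// Sign-extended: a negative 32-bit value keeps its value in 64 bits.
long long widen_signed(int v) {
    return v;
}

// Zero-extended: (unsigned int)-5 has already become 4294967291,
// and widening preserves that large positive value instead.
long long widen_via_unsigned(unsigned int v) {
    return v;
}
```

So `widen_signed(-5)` is still `-5`, but `widen_via_unsigned((unsigned int)-5)` is `4294967291`.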

Language features, such as operator overloads or templates and template instantiation, may require special considerations.
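For instance (invented names, standard C++): widening a variable's type can silently change which overload is selected, and template argument deduction can stop compiling when two arguments no longer share a type:

```cpp
// Hypothetical overload pair: which one runs depends only on the
// argument's declared type, so widening a variable silently changes
// the call target.
int pick(int)       { return 32; }
int pick(long long) { return 64; }
```

Here `pick(x)` starts calling the second overload the moment `x` is widened. Similarly, `std::max(n, total)` with a 32-bit `n` and a 64-bit `total` fails to deduce a single template argument, and needs an explicit `std::max<long long>(n, total)` (or a cast).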

Paulo1205
  • I believe I have things under control with regard to larger-to-shorter assignments. I usually deal with unsigned variables, but in the case where I do need to convert an int (32-bit) to a signed __int64, and the int is negative, is it not converted properly to the same negative value in the 64-bit int? – Peter Oct 09 '14 at 18:21