This is a classic question, but I can't find a decisive answer anywhere.
Suppose I have an integer 'x' and I perform the "++" operation on it in a loop, something like this:

int x=0;
while(true){
    x++;
    print(x);
} 

I suppose the output will wrap around from some maximum value to some minimum value, but what are those values?
And does it also depend on the programming language in use?

BlueFlame
  • What I can understand is this: if the language uses 32-bit integers, the value will wrap from -2^31 to +2^31 - 1 for signed, and from 0 to 2^32 - 1 for unsigned. Am I fully accurate here? – BlueFlame Jun 03 '13 at 15:02
  • That sounds like a reasonable assumption. Incidentally, your compiler probably knows more than we do: set x to 2^31 - 2 and see what happens when you increment it twice. – KidneyChris Jun 04 '13 at 10:19

3 Answers


A couple of references for comparing primitive data types and their defaults/max/min values:

http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html

http://www.cplusplus.com/reference/climits/

Farlan

It very much does depend on the language, and on how that language's implementation decides what happens next.

The max and min values will additionally depend on whether the datatype (int, in this case) is signed or unsigned (i.e. can it hold a negative number?) and on how many bits are used to implement it (a 16-bit integer won't hold as large a number as a 64-bit integer). Your language may or may not specify how many bits an int gets.

If you're using C, you can #include <limits.h> to get access to the INT_MAX macro, which will tell you the biggest number an int can hold on your system under your compiler. INT_MIN correspondingly gives you the smallest value.
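
For illustration, here is a minimal C sketch along those lines (assuming a hosted environment where <stdio.h> is available); note that actually incrementing past INT_MAX is undefined behaviour in standard C, so any wrap-around you observe there is not guaranteed:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* INT_MAX and INT_MIN describe the range of int for this compiler/platform */
    printf("INT_MAX = %d\n", INT_MAX);
    printf("INT_MIN = %d\n", INT_MIN);
    return 0;
}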

KidneyChris
  • What I can understand is this: if the language uses 32-bit integers, the value will wrap from -2^31 to +2^31 - 1 for signed, and from 0 to 2^32 - 1 for unsigned. Am I fully accurate here? – BlueFlame Jun 03 '13 at 15:01
  • Yes, that is correct. The reason is that the highest-order bit is used to represent the sign, which reduces the range of representable values, effectively giving a 32-bit signed number a 31-bit range. – RyanS Sep 12 '13 at 17:54

Yes. In Java, for example, Integer.MAX_VALUE + 1 ends up being Integer.MIN_VALUE, and likewise Integer.MIN_VALUE - 1 equals Integer.MAX_VALUE. I'm not sure how all languages handle this, but there's one example for you.
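
A rough sketch you could run to see that behaviour (the class name OverflowDemo is just for illustration):

public class OverflowDemo {
    public static void main(String[] args) {
        int x = Integer.MAX_VALUE;      // 2147483647
        x++;                            // int arithmetic wraps around in Java
        System.out.println(x);          // prints -2147483648 (Integer.MIN_VALUE)

        int y = Integer.MIN_VALUE;
        y--;                            // wraps the other way
        System.out.println(y);          // prints 2147483647 (Integer.MAX_VALUE)
    }
}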

Josh