
C, C++, C#, Java, Rust, etc. have signed ints by default. Most of the time you want unsigned variables, since cases where you have to represent something that can be below zero are less frequent than cases where you deal with natural numbers. Also, unsigned variables don't have to be encoded in two's complement form, and the most significant bit is available for an extra range of values.
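
To make the range difference concrete, here is a minimal C sketch (assuming a typical platform where `int` is 32 bits) that prints the ranges involved:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* On a typical platform with 32-bit int:
       int:          -2147483648 .. 2147483647
       unsigned int:           0 .. 4294967295
       The sign bit of int becomes an extra value bit in unsigned int. */
    printf("int:          %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int: %u .. %u\n", 0u, UINT_MAX);
    return 0;
}
```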

Taking all this into account, why would creators of languages make ints signed by default?

svick
c_spk
    "Most time you want unsigned variables, since cases where you have to represent something that can be below zero are less frequent than cases when you deal with natural numbers." Have you got any studies to back up this fascinating claim? – Sergey Kalinichenko Dec 12 '15 at 10:44
  • Looking back through SO C questions, it's quite difficult to find negative ints used for anything except '-1' for an error. – Martin James Dec 12 '15 at 10:53
  • @dasblinkenlight, no, I don't. I'm not an experienced programmer, but to me it looks like my statement is true. You deal with sizes, indices, counts, error codes, etc. more than with anything negative, don't you? – c_spk Dec 12 '15 at 10:55
  • @c_spk sizes, indexes, and counts should all be `size_t`, not `int`, and `size_t` is unsigned. Besides, you rarely use anything close to full range of `int`, so having an extra bit available makes little difference. – Sergey Kalinichenko Dec 12 '15 at 11:04
  • @dasblinkenlight, I, of course, know about `size_t`. If `int` was unsigned by default, `size_t` just would be a `typedef` of another type. I see no problem with it. – c_spk Dec 12 '15 at 11:11

4 Answers


I think your basic claim is false. Negative numbers are very common in real life. Think of temperatures, bank account balances, SO question and answer scores... Modeling physical data in computing requires a natural way to express negative quantities.

Indeed, the second example in The C Programming Language by Brian Kernighan and Dennis Ritchie is a program to convert temperatures between the Fahrenheit and Celsius scales. It is their very first example of a numeric application of the C language.

Array sizes are indeed positive, but pointer offsets may be negative in C.
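
A minimal sketch of the pointer-offset point (the array here is purely illustrative):

```c
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    int a[5] = {10, 20, 30, 40, 50};
    int *p = &a[3];

    /* A negative offset is fine as long as the result stays within the array. */
    printf("%d\n", p[-2]);            /* prints 20, i.e. a[1] */

    /* Pointer subtraction yields ptrdiff_t, which is a signed type. */
    ptrdiff_t d = &a[1] - &a[3];      /* d == -2 */
    printf("%td\n", d);

    return 0;
}
```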

Other languages such as Ada specify the range for numeric variables, but arithmetic computation still assumes continuity at 0 and negative numbers are implied by this.

Unsigned arithmetic, as specified in C, is actually confusing: `1U - 2U` is greater than `0`, just like `-1U`. Making this the default would be so counter-intuitive!
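
A quick sketch of that behaviour:

```c
#include <stdio.h>

int main(void)
{
    unsigned int x = 1U - 2U;      /* wraps around to UINT_MAX, not -1 */

    if (x > 0U)
        printf("1U - 2U = %u, which is greater than 0\n", x);

    if (x == -1U)                  /* -1 converted to unsigned is the same value */
        printf("...and it compares equal to -1U\n");

    return 0;
}
```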

chqrlie
  • I'm not sure elevator floor "numbers" are a good case, since they sometimes go like this: B (basement), G (ground floor), M (mezzanine), 1 (first floor). Most of those labels are not actually numbers, and none of them are negative numbers. – svick Dec 12 '15 at 13:05
  • 1
    @svick: welcome to cultural differences. Where I come from, they go `-2`, `-1`, `0`, `1`, `2`... In some other parts of the world, they start at `1`, special case basement levels and toward upper floors, they even skip `13`. Let me think of a better example. – chqrlie Dec 12 '15 at 13:11

It goes a very long way back in time:

  • Integers (and reals) were (only) signed in the first versions of FORTRAN, ALGOL and LISP, circa 1960. (COBOL is the major exception.) The same applied to later languages like Pascal and BCPL.

  • EDSAC (1949) supported (only) signed numbers.

In fact, C was one of the first languages that supported unsigned integers.

So ... why would creators of languages make ints signed by default?

Well, one reason is that prior to the C era, signed integers were the only kind available. So it was natural to make them the default.

The other reason is that use-cases that require signed numbers are actually far more common than you realize. And there is another class of use-cases where it doesn't really matter whether integers are signed or not.

Stephen C
  • 1
    While most of the early C language was designed by 1973, `unsigned` types were added to the language in 1977. Read this very interesting paper on *The Development of the C language* by Dennis M. Richie: https://ropas.snu.ac.kr/~kwang/4190.310/sociology/c-history.pdf – chqrlie Dec 12 '15 at 22:58

Someone already answered your question in this post: Default int type: Signed or Unsigned?

Quoting the accepted answer from that post:

On Unsigned Integers

Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation. However, in C, the advantages of such documentation are outweighed by the real bugs it can introduce. Consider:

`for (unsigned int i = foo.Length()-1; i >= 0; --i) ...`

This code will never terminate! Sometimes gcc will notice this bug and warn you, but often it will not. Equally bad bugs can occur when comparing signed and unsigned variables. Basically, C's type-promotion scheme causes unsigned types to behave differently than one might expect.

So, document that a variable is non-negative using assertions. Don't use an unsigned type.
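
To make the quoted advice concrete, here is a minimal C sketch, with a plain array standing in for `foo` (names and sizes are purely illustrative):

```c
#include <assert.h>
#include <stdio.h>

#define LEN 4

int main(void)
{
    int data[LEN] = {1, 2, 3, 4};

    /* Buggy version (shown commented out so this program terminates):
       i is unsigned, so "i >= 0" is always true and i wraps around
       to a huge value instead of going below zero.

       for (unsigned int i = LEN - 1; i >= 0; --i)
           printf("%d\n", data[i]);
    */

    /* The quoted recommendation: use a signed index and document the
       non-negative expectation with an assertion. */
    for (int i = LEN - 1; i >= 0; --i) {
        assert(i >= 0 && i < LEN);
        printf("%d\n", data[i]);
    }

    return 0;
}
```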

Sander
  • Any decent compiler fitted with the appropriate warning level will issue a diagnostic about the test being always true. A better idiom for this loop is: `for (unsigned int i = foo.Length(); i-- > 0;) { ... }` – chqrlie Dec 12 '15 at 11:56
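
For reference, a self-contained C sketch of that idiom (again with a plain array standing in for `foo`):

```c
#include <stdio.h>

int main(void)
{
    int data[] = {1, 2, 3, 4};
    unsigned int len = sizeof data / sizeof data[0];

    /* Decrement before the test: the body sees indices len-1 down to 0,
       and the loop exits once the test reads i == 0. */
    for (unsigned int i = len; i-- > 0; )
        printf("%d\n", data[i]);

    return 0;
}
```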

Python keeps a default pool of small signed `int` objects representing -128 to 127 (maybe another range; by modifying the source code you can make this pool bigger or smaller) so that those ints can be reused by reference. Doing this makes Python faster, because every time you need an `int` outside that range it has to allocate a new `int` object.

And, personally, I usually use negative numbers as return values for error conditions.

Putting it all together, I think there are lots of chances to use small negative numbers, and that makes a signed int default valuable.

fding