
The <cstdint> (<stdint.h>) header defines several integral types and their names follow this pattern: intN_t, where N is the number of bits, not bytes.

Given that a byte is not strictly defined as being 8 bits in length, why aren't these types defined as, for example, int1_t instead of int8_t? Wouldn't that be more appropriate since it takes into account machines that have bytes of unusual lengths?

Paul Manta
  • The whole point of the exact-width types is that they do *not* depend on architecture; they even mandate a representation (two's complement without padding), which is not true for arbitrary integer types... – Christoph Mar 18 '12 at 12:30
  • Do you have a case where this would be beneficial? – Pubby Mar 18 '12 at 12:31

3 Answers


On machines that don't have exactly those sizes, the exact-width types are not defined. That is, if your machine doesn't have an 8-bit byte, then int8_t will not be available. You would, however, still have the least-width versions, such as int_least16_t.

The reason, one suspects, is that when you want a precise size you usually want it in bits, not in abstract machine bytes. For example, all internet protocols deal in 8-bit bytes, so you want exactly 8 bits whether or not that is the native byte size.


This answer is also quite informative in this regard.

edA-qa mort-ora-y

int32_t could be a 4-byte 8-bits-per-byte type, or it could be a 2-byte 16-bits-per-byte type, or it could be a 1-byte 32-bits-per-byte type. It doesn't matter for the values you can store in it.


The idea of using these types is to make explicit the number of bits a variable can store. As you pointed out, different architectures may have different byte sizes, so stating the number of bytes doesn't guarantee the number of bits your variable can hold.

guga