
I have this operation:

uint32_t DIM = // ...
int32_t x = // ...

// Operation:
x & (DIM-1u)

How does implicit type conversion work in the expression x & (DIM-1u)?

  • Does it convert x to uint32_t?
  • Or (DIM-1u) to int32_t?
  • Also, what would be the result type? Is it uint32_t or int32_t?
Megidd

1 Answer


Two scenarios, noting that the literal 1u has type unsigned:

  1. unsigned is between 16 and 31 bits wide, inclusive. The type of DIM - 1u is uint32_t, and so is the type of the whole expression. This is because, under the usual arithmetic conversions, when one operand of a binary expression is signed and the other is unsigned with at least the same rank, the signed operand is converted to the unsigned operand's type.

  2. unsigned is 32 bits or larger. Then the type of DIM - 1u is unsigned, and so is the type of the whole expression.


Finally, note that the C++ standard permits unsigned and std::uint32_t to be the same type; i.e.

std::cout << std::is_same<std::uint32_t, unsigned>::value;

is allowed to be 1.

Bathsheba