
Can I set all bits in an unsigned variable of any width to 1s without triggering a sign conversion error (-Wsign-conversion) using the same literal?

Without -Wsign-conversion I could:

#include <stdint.h>

#define ALL_BITS_SET (-1)
uint32_t mask_32 = ALL_BITS_SET;
uint64_t mask_64 = ALL_BITS_SET;
uintptr_t mask_ptr = ALL_BITS_SET << 12; // here's the narrow problem!

But with -Wsign-conversion I'm stumped.

error: negative integer implicitly converted to unsigned type [-Werror=sign-conversion]

I've tried (~0) and (~0U) but no dice. The first has type int, which still triggers -Wsign-conversion, and the second doesn't widen past unsigned int, so it only sets the lower 32 bits of a 64-bit variable.
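For instance, a minimal sketch of that second failure mode (assuming the common case where unsigned int is 32 bits):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t mask_64 = ~0U;              /* ~0U is 0xFFFFFFFF, so only the low 32 bits get set */
    printf("%016" PRIX64 "\n", mask_64); /* prints 00000000FFFFFFFF */
    return 0;
}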

Am I out of luck?

EDIT: Just to clarify, I'm using the defined ALL_BITS_SET in many places throughout the project, so I hesitate to litter the source with things like (~(uint32_t)0) and (~(uintptr_t)0).


3 Answers


Try

uint32_t  mask_32  = ~((uint32_t)0);
uint64_t  mask_64  = ~((uint64_t)0);
uintptr_t mask_ptr = ~((uintptr_t)0);

Clearer solutions may exist; this one is a bit pedantic, but I'm confident it meets your needs.

chux - Reinstate Monica
  • Thanks for the answer, but I was looking for something I could use in a define that works in all cases. #define ALL_BITS_SET (-1) would work great if I didn't have to worry about sign conversion :( – user2543379 Jul 02 '13 at 21:11
  • Partial solution: Use the widest type as your ALL_BITS_SET and then use ((uint32_t) ALL_BITS_SET), etc.; see the sketch below. – chux - Reinstate Monica Jul 02 '13 at 21:36
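To illustrate that partial solution (a sketch, assuming uintmax_t is at least as wide as every type being masked):

#include <stdint.h>

#define ALL_BITS_SET (~(uintmax_t)0)

uint32_t  mask_32  = (uint32_t)ALL_BITS_SET;        /* explicit cast: no -Wsign-conversion warning */
uint64_t  mask_64  = (uint64_t)ALL_BITS_SET;
uintptr_t mask_ptr = (uintptr_t)ALL_BITS_SET << 12; /* the shift happens on an unsigned type */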

One's complement changes all zeros to ones, and vice versa.

So try:

#define ALL_BITS_SET (~(0))
uint32_t mask_32 = ALL_BITS_SET;
uint64_t mask_64 = ALL_BITS_SET;
aah134
  • Unfortunately, this answer still triggers "negative integer implicitly converted to unsigned type" in my dev environment. I'm using -Wsign-conversion option for GCC. – user2543379 Jul 02 '13 at 21:14

The reason you're getting the warning "negative integer implicitly converted to unsigned type" is that 0 is an integer literal, and as such it has type int, a signed type. So (~(0)), an all-bits-one value of type int, has the value (int)-1. The only way to convert a negative value to an unsigned value non-implicitly is, of course, to do it explicitly, but you appear to have already rejected the suggestion of using a type-appropriate cast. Alternative options:

Obviously, you can eliminate the implicit conversion to unsigned type by complementing an unsigned 0 instead, (~(0U)), but then you'd only have as many bits as are in an unsigned int.

Write a slightly different macro, and use the macro to declare your variables:

#define ALL_BITS_VAR(type,name) type name = ~(type)0
ALL_BITS_VAR(uint64_t,mask_64);

But that still only works for declarations.

Someone already suggested defining ALL_BITS_SET using the widest available type, which you rejected on the grounds of having an absurdly strict dev environment, but honestly, that's by far the best way to do it. If your development environment really is so strict as to forbid assignment of an unsigned value to an unsigned variable of a smaller type (the result of which is very clearly defined and perfectly valid), then you really don't have a choice anymore, and have to do something type-specific:

#define ALL_BITS_SET(type) (~(type)0)
uint32_t mask_32 = ALL_BITS_SET(uint32_t);
uint64_t mask_64 = ALL_BITS_SET(uint64_t);
uintptr_t mask_ptr = ALL_BITS_SET(uintptr_t) << 12;

That's all.

(Actually, that's not quite all... since you said that you're using GCC, there's some stuff you could do with GCC's typeof extension, but I still don't see how to make it work without a function macro that you pass a variable to.)
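For what it's worth, a sketch of that typeof idea (GCC-specific; the macro name SET_ALL_BITS is mine):

#include <stdint.h>

#define SET_ALL_BITS(var) ((var) = ~(typeof(var))0)

void example(void) {
    uintptr_t mask_ptr;
    SET_ALL_BITS(mask_ptr); /* all bits set, no sign conversion, but you must pass a variable */
}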

This isn't my real name