"Why is the cast to this type necessary,"

It is not.

"rather than passing in a uint32_t and casting to itself?"

That's what I would do.

"Is this undefined behavior?"

Maybe. But it definitely relies on implementation-defined behavior.
There are both general principles and problem-specific ones to consider here.
The most relevant general principle is the definition of uintptr_t, which you quoted. It tells you that uintptr_t can represent a distinct value corresponding to each distinct, valid void * value, so you can be confident that converting a void * to type uintptr_t will not produce a loss of fidelity. In general, then, if you want to represent an object pointer as an integer, uintptr_t is the integer type to choose.
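For example, here is a minimal sketch (not taken from the question's code) of the round trip the spec does guarantee: converting a valid void * to uintptr_t and back yields a pointer that compares equal to the original.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int object = 42;
    void *original = &object;

    uintptr_t as_integer = (uintptr_t) original;  // pointer --> integer: no loss of fidelity
    void *recovered = (void *) as_integer;        // integer --> pointer: guaranteed to compare equal to 'original'

    puts(recovered == original ? "equal" : "unequal");  // always prints "equal"
    return 0;
}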
It is relatively common to conclude that uintptr_t must be the same size as a void *, but although that's often true, the language spec places no such requirement. Since uintptr_t needs only to provide distinct representations for valid pointer values, and since distinct void * bit patterns don't have to represent distinct addresses, uintptr_t could conceivably be smaller than void *. On the other hand, it can fulfill its role just fine if it is larger than void *.
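If you're curious what a particular implementation chose, a trivial check like this (again, just a sketch) will show whether uintptr_t and void * happen to be the same size there:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    // The standard permits these two sizes to differ in either direction.
    printf("sizeof(void *)    = %zu\n", sizeof(void *));
    printf("sizeof(uintptr_t) = %zu\n", sizeof(uintptr_t));
    return 0;
}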
Moreover, the language spec requires that you can round-trip pointers through type uintptr_t, but it does not require that you can round-trip any variety of integer through a pointer. The results of most integer-to-pointer conversions are implementation-defined. That is, given this ...
uintptr_t x;
// ... assign a value to x ...
... the language spec allows this to print "unequal":
if (x == (uintptr_t)(void *) x) {
puts("equal");
} else {
puts("unequal");
}
But in this specific case:

- the upper bound on the values to be conveyed is read from an object of type uint32_t, and therefore all values to be conveyed are representable by that type; and
- the program is assuming a C implementation in which the integer --> pointer --> integer transit reproduces the original value for all the integer values to be conveyed.
Under these circumstances, language semantics present no reason to prefer uintptr_t over uint32_t as the integer type involved. That is, if the code presented works correctly, then a version in which uintptr_t is replaced with uint32_t must also work correctly. And I find the latter alternative cleaner and clearer.
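For illustration only, here is a hypothetical sketch of that pattern using uint32_t; the function name and values are invented, not taken from the question, and both casts remain implementation-defined, exactly as discussed above:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical receiver: recovers the integer that was conveyed through a void * parameter.
static void callback(void *arg) {
    uint32_t value = (uint32_t) arg;   // pointer --> integer: implementation-defined (may warn on 64-bit targets)
    printf("received %" PRIu32 "\n", value);
}

int main(void) {
    uint32_t value = 12345;
    callback((void *) value);          // integer --> pointer: implementation-defined
    return 0;
}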