The size_t type is unsigned. The subtraction of any two size_t values is defined behavior. However, firstly, if a larger value is subtracted from a smaller one, the result wraps around: it is the mathematical value, reduced to the smallest nonnegative residue modulo SIZE_MAX + 1. Since the range of size_t is implementation-defined, so is the numeric value you get. For instance, if the largest value of size_t is 65535, and the abstract result of subtracting two size_t values is -3, then the actual result will be 65536 - 3 = 65533. On a different compiler or machine with a different size_t, the numeric value will be different.
Secondly, a size_t value might be out of range of the type int. If that is the case, we get an implementation-defined result arising from the forced conversion. In this situation, almost any behavior can apply; it just has to be documented by the implementation (and since C99, an implementation-defined signal may be raised instead). For instance, the result could be clamped into the int range, producing INT_MAX. A common behavior seen on two's complement machines (virtually all) when converting wider (or equal-width) unsigned types to narrower signed types is simple bit truncation: enough low-order bits are taken from the unsigned value to fill the signed value, including its sign bit.
Because of the way two's complement works, if the original, arithmetically correct abstract result itself fits into int, then the conversion will produce that result.
For instance, suppose that the subtraction of a pair of 64-bit size_t values on a two's complement machine yields the abstract arithmetic value -3, which becomes the positive value 0xFFFFFFFFFFFFFFFD. When this is coerced into a 32-bit int, the common behavior seen in many compilers for two's complement machines is that the lower 32 bits are taken as the image of the resulting int: 0xFFFFFFFD. And, of course, that is just the value -3 in 32 bits.
So the upshot is that your code is de facto quite portable, because virtually all mainstream machines are two's complement, with conversion rules based on sign extension and bit truncation, including between signed and unsigned.
Except that sign extension doesn't occur when a value is widened while converting from unsigned to signed. Thus the one problem is the rare situation in which int is wider than size_t. If a 16-bit size_t result is 65533, due to 4 being subtracted from 1, this will not produce -3 when converted to a 32-bit int; it will produce 65533!