Some implementations guarantee that any attempt to read the value of a word-size-or-smaller object that isn't qualified volatile, around the time its value changes, will yield either the old value or the new one, chosen arbitrarily. In cases where this guarantee would be useful, the cost to a compiler of consistently upholding it would generally be less than the cost of working around its absence (among other things, because any means by which programmers could work around its absence would restrict a compiler's freedom to choose between the old and new values).
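As an illustration of where the guarantee matters, consider a lookup whose index may be updated by an interrupt handler while mainline code runs (a hypothetical sketch; the names here are illustrative):

#include <stdint.h>

#define TABLE_SIZE 256

uint16_t shared_index;          /* updated by an interrupt handler */
uint16_t table[TABLE_SIZE];

/* Under the old-or-new guarantee, idx will hold some value that
   shared_index actually held, so the bounds check below remains
   meaningful even if the object changes while this function runs. */
uint16_t sample(void)
{
    uint16_t idx = shared_index;
    return (idx < TABLE_SIZE) ? table[idx] : 0;
}

If a compiler were instead to re-read shared_index when indexing the table, the bounds check could be applied to one value while the array access used another, making the check worthless.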
On some other implementations, however, even an operation that would seem to involve a single read of a value may be processed using code that combines the results of multiple reads. When ARM gcc 9.2.1 is invoked with command-line arguments -xc -O2 -mcpu=cortex-m0
and given:
#include <stdint.h>

uint16_t test(uint16_t *p)
{
    uint16_t q = *p;       /* as written, a single read of *p */
    return q - (q >> 15);
}
it generates code which reads from *p, then from *(int16_t*)p, shifts the latter right by 15 (an arithmetic shift, which yields 0 or -1), and adds that result to the former. If the value of *p were to change between the two reads, this could cause the function to return 0xFFFF, a value which should be impossible: for any single value of q, q - (q >> 15) can never exceed 0xFFFE.
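In C terms, the generated code behaves roughly as though the function had been written as follows (a sketch reconstructed from the description above, not the compiler's literal output):

#include <stdint.h>

uint16_t test_as_compiled(uint16_t *p)
{
    uint16_t first = *p;            /* first read, zero-extended */
    int16_t second = *(int16_t*)p;  /* second read, sign-extended */
    /* The arithmetic shift yields 0 or -1, so the addition below
       matches q - (q >> 15) only if both reads saw the same value. */
    return first + (second >> 15);
}

If *p changes from 0x0000 to 0x8000 between the two reads, first will be 0 and second >> 15 will be -1, and the function will return 0xFFFF.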
Unfortunately, many people who design compilers so that they always refrain from "splitting" reads in such fashion regard that behavior as so natural and obvious that they see no particular reason to expressly document that their compilers never do anything else. Meanwhile, some other compiler writers reason that because the Standard allows compilers to split reads even when there is no benefit to doing so (splitting the read in the above code makes it bigger and slower than it would be if it simply read the value once), any code that would rely upon compilers refraining from such "optimizations" is "broken".
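Code which cannot rely upon a compiler to refrain from such splitting can force a single access by qualifying the read volatile, as in the following sketch (assuming, as is typical of implementations intended for low-level programming, that a volatile uint16_t access is processed as a single load; the helper name is illustrative):

#include <stdint.h>

/* Perform exactly one load of *p; the volatile qualifier forbids
   the compiler from re-reading or splitting the access. */
static inline uint16_t read_once_u16(const uint16_t *p)
{
    return *(const volatile uint16_t *)p;
}

uint16_t test_single_read(uint16_t *p)
{
    uint16_t q = read_once_u16(p);  /* q is now a stable snapshot */
    return q - (q >> 15);
}

This is, of course, exactly the sort of workaround alluded to above: the volatile qualifier denies the compiler any freedom to choose between old and new values, even in cases where exercising such freedom could have allowed more efficient code.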