The program in the question does not contain any code to read the values from memory. If i = &n; is accepted by the compiler, it merely sets i to the address of n; it does not read any bytes of n. Additionally, 2864434397 does not fit in a 32-bit int, so the result of the conversion in n = 2864434397; is implementation-defined.
To examine the individual bytes in memory, we can use this:
#include <stdio.h>

int main(void)
{
    // Use unsigned int so we can avoid complications from a sign bit.
    unsigned int n = 0xaabbccdd;

    /* Use a pointer (marked with "*") to hold the address of n.
       Use a pointer to unsigned char so we can address the individual bytes.
    */
    unsigned char *p = (unsigned char *) &n;

    // Use a loop to iterate through the number of bytes in n.
    for (size_t i = 0; i < sizeof n; ++i)
        // Print each unsigned char (format hhx) in n.
        printf("Byte %zu is 0x%02hhx.\n", i, p[i]);
}
The bytes in memory may appear in the order AA₁₆, BB₁₆, CC₁₆, DD₁₆, but they may appear in other orders. In the C implementation I am using, the output of the program is:
Byte 0 is 0xdd.
Byte 1 is 0xcc.
Byte 2 is 0xbb.
Byte 3 is 0xaa.
Paragraph 2 of clause 6.2.6.1 of the 2018 C standard says the C implementation (mostly the compiler) defines the order in which the bytes of an object such as an int are stored:
Except for bit-fields, objects are composed of contiguous sequences of one or more bytes, the number, order, and encoding of which are either explicitly specified or implementation-defined.
Most C implementations use a byte ordering that matches the computer processor they are targeting. However, there are situations in which this is not the case:
- Some processors let software select endianness. (Endianness refers to whether the “big end” of an integer, its high-value bits, or its “little end,” the low-value bits, are stored at the lower byte address in memory.)
- A C implementation might be designed to support old software that needs a particular byte order.
- The bytes of an object might be partly determined by the processor and partly by the compiler. For example, on a “16-bit” processor that only supports 16-bit arithmetic and 16-bit loads and stores, a compiler might support a 32-bit integer type in software, using multiple instructions to load it, store it, and do arithmetic on it. In this case, the 32-bit integer could have two 16-bit parts. The order of the bytes within each 16-bit part would be determined by the processor, but the order of the two parts would be entirely up to the compiler. So the bytes could appear in memory in the order CC₁₆, DD₁₆, AA₁₆, BB₁₆.