I'm trying to cast a uint64_t (holding a time in nanoseconds since D-day, taken from a Boost.Chrono high-resolution clock) to a uint32_t in order to seed a random number generator.
I just want the least significant 32 bits of the uint64_t. Here is my attempt:
uint64_t ticks64 = dtn.count();                     // tick count in nanoseconds
uint64_t ticks32_manual = ticks64 & 0xFFFFFFFF;     // mask off the upper 32 bits
uint32_t ticks32_auto = (uint32_t) ticks64;         // let the cast truncate
mexPrintf("Periods: %llu\n", ticks64);
mexPrintf("32-bit manual truncation: %llu\n", ticks32_manual);
mexPrintf("32-bit automatic truncation: %u\n", ticks32_auto);
The output of my code is as follows:
Periods: 651444791362198
32-bit manual truncation: 1331774102
32-bit automatic truncation: 1331774102
I was expecting the last few digits of the 32-bit and the original 64-bit representations to be the same, but they are not. That is, I thought I would simply "lose the left half" of the 64-bit number.
Can anyone explain what's going on here? Thanks.
Btw, I've seen this link.
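In case it helps, here is a minimal, self-contained sketch of the truncation I'm asking about. It uses plain C++ with a hard-coded value (the number from my output above) in place of the MEX/Boost pieces, so the names and setup here are just for illustration:

#include <cstdint>
#include <cstdio>
#include <cinttypes>

int main() {
    // Stand-in for dtn.count(); the value is the one from my output above.
    std::uint64_t ticks64 = 651444791362198ULL;

    std::uint64_t ticks32_manual = ticks64 & 0xFFFFFFFFULL;             // mask off the upper 32 bits
    std::uint32_t ticks32_auto = static_cast<std::uint32_t>(ticks64);   // conversion keeps the low 32 bits

    std::printf("Periods: %" PRIu64 "\n", ticks64);
    std::printf("32-bit manual truncation: %" PRIu64 "\n", ticks32_manual);
    std::printf("32-bit automatic truncation: %" PRIu32 "\n", ticks32_auto);
    return 0;
}

This prints the same three values as the mexPrintf version, so the behavior doesn't seem to be specific to my MEX setup.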