I've got two values from an unsigned 48-bit nanosecond counter, which may wrap. I need the difference, in nanoseconds, between the two times.
I think I can assume that the readings were taken at roughly the same time, so of the two possible answers I think I'm safe taking the smaller in magnitude.
They're both stored as `uint64_t`, because I don't think I can have 48-bit types. I'd like to calculate the difference between them as a signed integer (presumably `int64_t`), accounting for the wrapping.
So, e.g., if I start out with `x = 5` and `y = 3`, then the result of `x - y` is `2`, and it will stay `2` if I increment both `x` and `y`, even as they wrap over the top of the maximum value `0xffffffffffff`.
Similarly, if `x = 3` and `y = 5`, then `x - y` is `-2`, and it will stay so whenever `x` and `y` are incremented simultaneously.
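
To pin the requirement down, here's a minimal self-contained check of that invariant (my own sketch; `MASK48` is just a name I've introduced for 2^48 - 1): both counters advance together modulo 2^48, and the wrapped 48-bit difference holds steady across wraps.

    #include <assert.h>
    #include <stdint.h>

    #define MASK48 0xFFFFFFFFFFFFull /* 2^48 - 1 */

    int main(void)
    {
        uint64_t x = 5, y = 3;
        for (int i = 0; i < 1000; ++i) {
            /* low 48 bits of the 64-bit subtraction == difference mod 2^48 */
            assert(((x - y) & MASK48) == 2);
            x = (x + MASK48 / 2) & MASK48; /* big steps, to force wraps */
            y = (y + MASK48 / 2) & MASK48;
        }
        return 0;
    }

That much covers the unsigned part; it's the sign I'm unsure how to recover cleanly.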
If I could declare `x` and `y` as `uint48_t`, and the difference as `int48_t`, then I think

    int48_t diff = x - y;

would just work.
How do I simulate this behaviour with the 64-bit arithmetic I've got available?
(I think any computer this is likely to run on will use 2's complement arithmetic.)
P.S. I can probably hack this out, but I wonder if there's a nice, neat, standard way to do this sort of thing that the next person to read my code will understand.
P.P.S. Also, this code is going to end up in the tightest of tight loops, so something that compiles efficiently would be nice; if there has to be a choice, speed trumps readability.
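
For concreteness, this is the sort of hack I have in mind. It's only a sketch, and it leans on implementation-defined behaviour in C (converting an out-of-range unsigned value to signed, and right-shifting a negative value), which every 2's complement machine I'm aware of handles the obvious way:

    #include <stdint.h>

    /* Wrapped 48-bit difference as a signed value.
     * Unsigned subtraction wraps mod 2^64, and its low 48 bits are the
     * difference mod 2^48; shifting up by 16 discards the high bits,
     * and the arithmetic shift back down sign-extends bit 47. */
    static inline int64_t diff48(uint64_t x, uint64_t y)
    {
        return (int64_t)((x - y) << 16) >> 16;
    }

A branchy alternative (mask to 48 bits, then subtract 2^48 whenever bit 47 is set) would avoid the implementation-defined shift, at the cost of a compare.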