
Suppose I have a timer that returns a uint32_t value (representing a number of ticks), always counts upwards, and wraps back to 0 after reaching UINT32_MAX.

I need to measure the elapsed time from time a to time b, and I don't know how high the timer might be initially or whether it will wrap between a and b. Both a and b are of type uint32_t and are assigned from the timer's return value.

Is it correct that we can take (uint32_t)(b-a) to get the elapsed time so long as no more than UINT32_MAX ticks have elapsed, even if the timer wrapped once between a and b? What is the proof of this?

double-beep
user553702

1 Answer


Let N = 2^32. Let A and B be the true, unwrapped timestamps of the start and end, and assume A ≤ B < A + N (i.e., no more than N - 1 ticks elapsed). Then a = A % N and b = B % N are the wrapped values the timer actually returns. We are interested in computing the duration D = B - A.

When a ≤ b, the timer did not wrap between the two samples (A and B contain the same multiple of N), so D = B - A = b - a.

What about when a > b? Then the timer wrapped exactly once between the samples: A = qN + a but B = (q + 1)N + b, so D = B - A = b + N - a. For example, if a = N - 6 (i.e., UINT32_MAX - 5) and b = 5, then D = 5 + N - (N - 6) = 11 ticks.

But b - a is of course congruent to b + N - a modulo N. Since addition and subtraction between std::uint32_t values are performed modulo N, you can safely compute your answer in both cases as D = b - a. Subtracting two std::uint32_t values already yields a std::uint32_t (on the common platforms where int is no wider than 32 bits), so there's no reason to write a cast as in (std::uint32_t)(b - a).
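Here is a minimal sketch of that result in C++, checking both cases from the proof above; the helper name elapsed is mine, not from the question:

```cpp
#include <cassert>
#include <cstdint>

// Elapsed ticks from sample a to sample b (hypothetical helper).
// Correct even if the timer wrapped once between the samples, provided
// no more than UINT32_MAX ticks elapsed: unsigned subtraction is modulo 2^32.
std::uint32_t elapsed(std::uint32_t a, std::uint32_t b) {
    return b - a;
}

int main() {
    // Case a <= b: no wrap between the samples.
    assert(elapsed(100u, 250u) == 150u);

    // Case a > b: the timer wrapped once. Starting at UINT32_MAX - 5,
    // it takes 6 ticks to wrap to 0 and 5 more to reach 5, so 11 in total.
    assert(elapsed(UINT32_MAX - 5u, 5u) == 11u);
}
```

Note that even where the intermediate subtraction is promoted to a wider signed type, converting the result back to std::uint32_t (as the return statement does) performs the modulo-N reduction, so the computed value is the same.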

Timothy Shields