
I have what I thought was a simple pattern: I want to make a time point 5 seconds in the future, run a task which might take a while, and then sleep until that time point (potentially not sleeping at all if that time has already been reached). However, whenever I try to use std::this_thread::sleep_until with a time point in the past, my application instead hangs forever. Here is an MCVE:

#include <chrono>
#include <thread>

int main(){
  std::this_thread::sleep_until(std::chrono::steady_clock::now() - std::chrono::seconds(1));
}

Using g++ (GCC) 4.8.5, this never returns. I've also tried system_clock with the same results. Using strace to examine what's happening, the last thing I get is:

nanosleep({4294967295, 0},

so I guess it would return eventually (4294967295 seconds is 2^32 - 1, which works out to roughly 136 years), but I don't feel like waiting that long.

Is this a g++ bug? I can't imagine this behavior is intentional. I found the question Is behaviour well-defined when sleep_until() specifies a time point in the past?, but it doesn't seem that any conclusion was reached on whether the standard actually specifies what should happen. I've since implemented another solution to my problem; I'm just curious whether what I'm seeing is UB or a bug.
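
For reference, the actual pattern I was trying to write is roughly the following (just a sketch; run_task() is a placeholder for the real work):

#include <chrono>
#include <thread>

// Placeholder for the real work, which might take a while.
void run_task() { /* ... */ }

int main(){
  // Pick a deadline 5 seconds from now.
  auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(5);
  run_task();
  // Wait out whatever time is left; if run_task() overran the deadline,
  // this should return immediately.
  std::this_thread::sleep_until(deadline);
}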

  • This looks like a bug. On 4.9 or newer it does not hang. – NathanOliver Aug 08 '18 at 20:45
  • My answer was wrong, so I deleted it. The wording in the standard doesn't really specify, as far as I can parse it, what happens when the time_point passed is in the past. It still looks like a bug, though. – Not a real meerkat Aug 08 '18 at 21:12
  • It does specify, however, that if the clock is adjusted to after said time_point, it should wake as soon as possible. This is what I (wrongly) quoted in my answer. – Not a real meerkat Aug 08 '18 at 21:14
  • @CássioRenan: I'm the one that upvoted your answer before you deleted it, and I'm also the one who led the standardization process on `<chrono>` and the `_until` functions. If the spec doesn't say this is a bug, then there's also a bug in the spec. If you have suggestions on how to improve the spec, [here is how to go about that](http://cplusplus.github.io/LWG/lwg-active.html#submit_issue). – Howard Hinnant Aug 08 '18 at 21:17
  • @CássioRenan: If you like, I can vote to undelete your answer (so you don't have to re-type it if you are so inclined). – Howard Hinnant Aug 08 '18 at 21:23
  • @HowardHinnant thanks for your comments! I'm not really sure where to start on suggesting improvements, since I'm not sure I got everything right. About the answer: I could undelete it, since I deleted it myself, but it's still wrong, because it doesn't answer the question. (The clock is never *adjusted during the timeout* in the question). – Not a real meerkat Aug 08 '18 at 21:30
  • If I were writing the answer, I would do what you did, except emphasize "Given a clock time point argument Ct, the clock time point of the return from timeout should be Ct+Di+Dm when the clock is not adjusted during the timeout.". Then note that this is true whether or not Ct is in the past or future, and the implementation _should_ strive to minimize Di and Dm (which are both non-negative). – Howard Hinnant Aug 08 '18 at 21:35
  • 1
  • @HowardHinnant I know Stack Overflow is not really the place for "Thanks" and all of that, but I must say our small exchange here inspired me to study more, and I finally can read this section of the standard and understand it well enough to say I know what I'm talking about, which I was not comfortable with doing before. So, thanks a lot! (BTW, the bug is fixed in gcc: I've updated the answer). – Not a real meerkat May 09 '19 at 18:16

1 Answer


Looks like a bug:

30.2.4 Timing specifications [thread.req.timing]

4   The functions whose names end in _until take an argument that specifies a time point. These functions produce absolute timeouts. Implementations should use the clock specified in the time point to measure time for these functions. Given a clock time point argument Ct, the clock time point of the return from timeout should be Ct+Di+Dm when the clock is not adjusted during the timeout. (...)

Where Di is defined as a "quality of implementation" delay, and Dm is defined as a "quality of management" delay.

As Howard Hinnant brilliantly emphasizes, an implementation should strive to minimize Di and Dm:

30.2.4 Timing specifications [thread.req.timing]

2   Implementations necessarily have some delay in returning from a timeout. Any overhead in interrupt response, function return, and scheduling induces a “quality of implementation” delay, expressed as duration Di. Ideally, this delay would be zero. Further, any contention for processor and memory resources induces a “quality of management” delay, expressed as duration Dm. The delay durations may vary from timeout to timeout, but in all cases shorter is better.

Note that this must be true no matter what the value of Ct is, and that an infinite delay is definitely not minimal.
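
If you want to see what your own implementation does, something like the following (just a quick sketch) measures how long the call actually blocks when handed a time point one second in the past; on a conforming implementation it should print a value close to zero:

#include <chrono>
#include <iostream>
#include <thread>

int main(){
  auto start = std::chrono::steady_clock::now();
  // A time point that is already in the past.
  std::this_thread::sleep_until(start - std::chrono::seconds(1));
  auto elapsed = std::chrono::steady_clock::now() - start;
  // Should be (close to) 0 ms on a conforming implementation.
  std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
            << " ms\n";
}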

As a small update: this is now fixed as of version 4.9.3. See the report on the GCC bug tracker for details.
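
If you're stuck on an affected version, a guard along these lines avoids hitting the problem (just a sketch of one possible workaround, not part of the actual fix; sleep_until_safe is a made-up name):

#include <chrono>
#include <thread>

// Sketch of a workaround: skip the sleep entirely when the deadline has
// already passed, so the buggy past-time-point path is never taken.
template <class Clock, class Duration>
void sleep_until_safe(const std::chrono::time_point<Clock, Duration>& tp){
  if (tp > Clock::now())
    std::this_thread::sleep_until(tp);
}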
