
OK, I really have no idea why this is happening. I'm currently implementing a thread container that runs an infinite loop on a detached thread, throttled to a certain rate per iteration.

Header:

class timeloop
{
public:
    std::thread thread = { };
    bool state = true;

    void (*function_pointer)() = nullptr;

    double ratio = 1.0;
    std::chrono::nanoseconds elapsed = { };

    timeloop(
        void (*function_pointer)() = nullptr
    );

    void start();
    void function();
};

Definition:

void timeloop::start()
{
    this->thread = std::thread(
        &timeloop::function,
        this
    );
}

void timeloop::function()
{
    std::chrono::steady_clock::time_point next;

    std::chrono::steady_clock::time_point start;
    std::chrono::steady_clock::time_point end;

    while (
        this->state
        )
    {
        start = std::chrono::high_resolution_clock::now();
        next = start + std::chrono::nanoseconds(
            (long long) (this->ratio * (double) std::chrono::nanoseconds::period::den)
        );

        if (
            this->function_pointer != nullptr
            )
        {
            this->function_pointer();
        }

        /***************************
            this is the culprit
        ***************************/
        std::this_thread::sleep_until(
            next
        );

        end = std::chrono::high_resolution_clock::now();
        this->elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(
            end - start
            );
    }
}

Calling code:

timeloop* thread_draw = new timeloop(
    &some_void_function
);
thread_draw->ratio = 1.0 / 128.0;

thread_draw->start();
thread_draw->thread.detach();

The definition code is behaving weirdly, specifically `std::this_thread::sleep_until`. With `this->ratio = 1.0 / 128.0` I'm expecting a frame rate of around 128, and the computed values of `start` and `next` reinforce this, yet it inexplicably hovers at around 60. And yeah, I tried just dividing `next` by 2, but that actually made it drop to around 40.

Extra code to verify the normal time to sleep for:

auto diff = std::chrono::nanoseconds(
    next - start
).count() / (double) std::chrono::nanoseconds::period::den;
auto equal = diff == this->ratio;

where equal evaluates to true.

Frame rate calculation:

double time = (double) thread_draw->elapsed.count() / (double) std::chrono::nanoseconds::period::den;
double fps = 1.0 / time;

Though I also used external FPS counters to verify (NVIDIA ShadowPlay and RivaTuner/MSI Afterburner), and they were in a range of about +-5 of the calculated value.

And I know it's std::this_thread::sleep_until because once I comment that out, the frame rate jumps up to around 2000. Yeah...

I'm truly baffled at this, especially seeing how I can't find any evidence of anybody else ever having had this problem. And yes, I'm aware that sleep functions aren't perfectly accurate, and there's bound to be hiccups every now and then, but consistently sleeping for pretty much double the scheduled time is just absurd.

Did I perhaps misconfigure a compiler option or something? It's definitely not a performance problem, and I'm reasonably sure it's not a logic error either (seeing how all the calculations check out) [unless I'm abusing chrono somewhere].

TARN4T1ON
  • By setting your `next` point relative to a call to `now()` every time, you lose any advantage absolute time would have given you. Maybe this will help: https://stackoverflow.com/questions/35468032/running-code-every-x-seconds-no-matter-how-long-execution-within-loop-takes/35468802#35468802 – Galik Mar 28 '21 at 13:35

2 Answers


There are no guarantees on the resolution of `sleep_until`; you are only guaranteed that the thread will not be woken before the time point. If you are implementing a main game loop, read [Fix your timestep](https://gafferongames.com/post/fix_your_timestep/).

Using sleep to guarantee timing is a terrible way to do it. You are at the mercy of the OS scheduler, and e.g. Windows has a minimum sleep granularity of about 10 milliseconds, I believe. (If the implementation actually asks the OS to put the thread to sleep and the OS decides to do a context switch.)

The lag might also be caused by VSync in the drawing thread if you are calling glfwSwapBuffers or similar. That would explain why you are limited to 60 FPS, but not why commenting out the sleep solves the problem.

So my guess is the OS's sleep granularity above. I would recommend removing the sleep and relying on VSync; that's the right frequency to draw at anyway. Synchronization with logic threads will be a pain in... but that's always the case.

Quimby
  • It's really peculiar because I used basically the same code in a different project and it worked perfectly fine. On the same machine too. Definitely no 10ms minimum delay, it was spot on, even when targeting higher frame rates (like 200+). Either way, what I gathered from that article is basically to use a while loop and do what the OS does yourself (even if it's slightly more expensive). – TARN4T1ON Mar 28 '21 at 13:20
  • @TARN4T1ON Not sure then, you can try to make a simple program that sleeps in the main and see what is the minimal amount you can get. – Quimby Mar 28 '21 at 13:23
  • Yeah, the article is more physics-oriented but it's good advice overall for timing-related loops. It does not use sleep precisely because it is not reliable. – Quimby Mar 28 '21 at 13:24
  • I actually took a look at the project I first used this sort of code in, and it doesn't work there anymore either. The builds of that lag too. Really funky. Might honestly be some recent update, some other things have stopped working for me too (like Visual Studio's Performance Profiler, which I actually wanted to use to get to the bottom of this earlier, but yeah...). I'll just use the while-loop. Thanks! – TARN4T1ON Mar 28 '21 at 13:50
  • [Here](https://stackoverflow.com/a/59446610/576911) is a quality chrono implementation of [Fix your timestep](https://gafferongames.com/post/fix_your_timestep/). – Howard Hinnant Mar 28 '21 at 19:12

I think your problem is that you are not using absolute timing, because you keep resetting your absolute time point `next` relative to the current time `now()`.

Try changing this:

while (
    this->state
    )
{
    start = std::chrono::high_resolution_clock::now();
    next = start + std::chrono::nanoseconds(
        (long long) (this->ratio * (double) std::chrono::nanoseconds::period::den)
    );
// ...

to this

// before the loop
next = std::chrono::high_resolution_clock::now();

while (
    this->state
    )
{
    // keep the timing absolute by adding to the absolute time point
    next = next + std::chrono::nanoseconds(
        (long long) (this->ratio * (double) std::chrono::nanoseconds::period::den)
    );

// ...

That way you call `now()` only once, and all your subsequent timings are absolutely (as opposed to relatively) calculated from that point.

Edited to add:

Additionally, I would avoid using `std::chrono::high_resolution_clock`. It is often just an alias of `std::chrono::system_clock`, which is subject to random time alterations as the system clock attempts to remain synchronized with internet time.

Use `std::chrono::steady_clock`.
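Putting both of this answer's fixes together (steady clock, absolute deadlines advanced by a precomputed period), the loop might look like the sketch below. The `timeloop` members are minimal stand-ins for the ones in the question; `state` is made atomic here since it is written from another thread:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Minimal stand-in for the question's timeloop members.
struct timeloop
{
    std::atomic<bool> state{true};       // atomic: flipped from another thread
    void (*function_pointer)() = nullptr;
    double ratio = 1.0 / 128.0;          // seconds per iteration
    std::chrono::nanoseconds elapsed{};

    void function();
};

void timeloop::function()
{
    using clock = std::chrono::steady_clock;

    // One frame period, computed once from the ratio.
    const auto period = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(ratio));

    auto next = clock::now();  // anchor the schedule once

    while (state)
    {
        const auto start = clock::now();

        if (function_pointer != nullptr)
            function_pointer();

        next += period;  // absolute deadline: no drift accumulates
        std::this_thread::sleep_until(next);

        elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(
            clock::now() - start);
    }
}
```

Note that if an iteration overruns its deadline, `sleep_until` with a past time point returns immediately, so the loop catches up rather than drifting further.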

Galik
  • That's how it used to work, sad to say it doesn't make a difference. Also, this has the potential of lagging behind too far (if the main function takes too long too often, or potentially the entire time), at which point it doesn't even limit anything anyway. – TARN4T1ON Mar 28 '21 at 13:45
  • @TARN4T1ON It seems to me that, if you want regular timing, then the work you want to do in that time must be doable. If the main function takes too long then there is nothing you can really do because your basic problem is that the main function takes too long... But regardless, the way you are doing it now will inevitably drift because it is not anchored to an absolute time base. – Galik Mar 28 '21 at 13:50
  • @TARN4T1ON I also added a note about `std::chrono::high_resolution_clock`. You should be using `std::chrono::steady_clock` – Galik Mar 28 '21 at 13:55
  • You're right about that, I changed it when I was in the trial and error stage and didn't change it back. But it seems like sleep is much worse than I remember (though I could swear it worked just fine a few months ago). Thanks for your help. – TARN4T1ON Mar 28 '21 at 14:00