
I am working on a timer that prints a timestamp on every expiration, at whatever millisecond interval the user specifies.

struct itimerspec expresses time as seconds plus nanoseconds in its it_interval member.

I want to let the user specify the milliseconds, so I am doing a conversion in the constructor:

mNanoseconds = milliseconds * 1000000 (there are 1,000,000 nanoseconds in one millisecond).

When the user enters anything under 1000 milliseconds, the timer operates normally, but with an interval of 1000 milliseconds the timer never fires at all. It just sits there. I am unsure whether my conversion is the issue, or what else it could be.

mNanoseconds is a uint64_t, so I don't think the size of the mNanoseconds integer is the issue.

Timer.cxx

std::mutex g_thread_mutex;
Timer::Timer(){};

Timer::Timer(int milliseconds)
    : mMilliseconds(milliseconds)
{
    mNanoseconds = milliseconds * 1000000;
    std::cout << mNanoseconds << " Nanoseconds\n";
}

std::string Timer::getTime()
{
    std::time_t result = std::time(nullptr);
    return (std::asctime(std::localtime(&result)));
}

void Timer::startIntervalTimer()
{    
    struct itimerspec itime;
    struct timeval tv;
    int count = 0;
    tv.tv_sec = 0; // seconds 
    tv.tv_usec = 0; // microseconds
  
    gettimeofday(&tv, NULL);

    //itime.it_interval.tv_sec = 2; //it_interval (value between successive timer expirations)
    itime.it_interval.tv_nsec = mNanoseconds;
    itime.it_value.tv_sec = itime.it_interval.tv_nsec;

    int fd = timerfd_create(CLOCK_REALTIME, 0 );
    timerfd_settime(fd, TFD_TIMER_ABSTIME, &itime, NULL);
    while(count != 10)
    {
        uint64_t exp;
        int n = read(fd, &exp, sizeof(exp));

        //We don't lock the read(), but lock the actions we take when the read expires.
        //There is a delay here- so not sure what that means for time accuracy
        //Started to look into atomic locking, but not sure if that works here
        g_thread_mutex.lock();
        std::string t = getTime();
        std::cout << t << "  fd = " << fd << "  count # " << count << std::endl;
        g_thread_mutex.unlock();

        count++;
    }  
    stopTimer(fd, itime, tv);
}
Remy Lebeau
    Please don't tag irrelevant languages. – Some programmer dude Feb 28 '23 at 16:24
  • What is the type of `mNanoseconds`? – Eugene Sh. Feb 28 '23 at 16:24
  • 1
    `milliseconds * 1000000` is still an `int` regardless of the type of `mNanoseconds`. – Kevin Feb 28 '23 at 16:26
  • @Kevin Right. And it shouldn't overflow with 32-bit `int`. Still can if `mNanoseconds` is a shorter type though :) – Eugene Sh. Feb 28 '23 at 16:31
  • 1
    _Side note:_ Instead of `gettimeofday`, I'd use `clock_gettime(CLOCK_REALTIME,...)` or `clock_gettime(CLOCK_MONOTONIC,...)` – Craig Estey Feb 28 '23 at 16:35
  • 1
    And what are the requirements for the timer and your assignments? Do you have to use POSIX clocks and timers? Or can you use [the C++ standard chrono library](https://en.cppreference.com/w/cpp/header/chrono)? – Some programmer dude Feb 28 '23 at 16:39
  • @Someprogrammerdude I thought this was still considered C in a C++ wrapper. Wilco for future questions. Also, I am required to use POSIX clocks right now. I had no idea that chrono could be used as a timer as well, though. I will have to check that out. – Mr.Longbottom Feb 28 '23 at 16:57

2 Answers


Your nanosecond value is out of range. From the timerfd_settime man page:

timerfd_settime() can also fail with the following errors:

EINVAL

new_value is not properly initialized (one of the tv_nsec falls outside the range zero to 999,999,999).

You're setting your time up wrong:

itime.it_interval.tv_nsec = mNanoseconds;

Should be

itime.it_interval.tv_nsec = mNanoseconds % 1000000000;
itime.it_interval.tv_sec = mNanoseconds / 1000000000;

And make sure to check the return value of timerfd_settime.

Also, I'm not sure what you meant by

itime.it_value.tv_sec = itime.it_interval.tv_nsec;

Based on my reading of the man page, you're setting the initial expiration time in seconds to the nanoseconds value, which makes no sense. Since you pass TFD_TIMER_ABSTIME, it should probably be based on the current time. You also need to zero out the rest of the fields of itime, otherwise the rest of the struct is filled with garbage data.

Kevin
  • Thanks for the tip on checking the return value of my timerfd_settime() function, I definitely need to remember to continue using error handlers. Also, thank you very much for the help. Furthermore, to address your last note about: "itime.it_value.tv_sec = itime.it_interval.tv_nsec;" I was honestly trying to do a lot of things before I just ended up seeing if someone on here could help me. I will make sure to clean up my code first before posting. – Mr.Longbottom Feb 28 '23 at 18:09

The possible reason that I can think of is:

milliseconds * 1000000 is computed in a temporary of type int, so it can overflow.

There is no point in doing this arithmetic in int here. You can cast milliseconds to uint64_t first:

static_cast<uint64_t>(milliseconds) * 1000000

The other solution is to assign first and multiply afterwards, so the multiplication happens in uint64_t:

mNanoseconds = milliseconds;
mNanoseconds *= 1000000;