
I have an embedded Linux device that interfaces with another "master" device over a serial comm protocol. Periodically the master passes its date down to the slave device, because later the slave will return information to the master that needs to be accurately timestamped. However, the Linux 'date' command only sets the system date to within a second accuracy. This isn't enough for our uses.

Does anybody know how to set a Linux machine's time more precisely than 1 second?

Bo Persson
mdavis
  • The 1 second precision is standard for Linux I believe. You can't just set it to a higher precision as everything else will break (the time is stored/sent as an integer). I'm pretty sure there's some command or something to get the number of milliseconds since midnight which you can pass to the device, and receive the milliseconds back from it in your code. If I knew Linux well enough I'd post it as an answer here, but I'm pretty sure you will have to pass another time value if you want sub-second precision. – Russ Jun 22 '12 at 22:38

4 Answers


The settimeofday(2) method given in other answers has a serious problem: it does exactly what you say you want. :)

The problem with directly changing a system's time, instantaneously, is that it can confuse programs that get the time of day before and after the change if the adjustment was negative. That is, they can perceive time to go backwards.

The fix for this is adjtime(3), which is simple and portable, or adjtimex(2), which is complicated, powerful, and Linux-specific. Both calls use sophisticated algorithms to slowly adjust the system time over some period, forward only, until the desired change is achieved.

By the way, are you sure you aren't reinventing the wheel here? I recommend that you read Julien Ridoux and Darryl Veitch's ACM Queue paper Principles of Robust Timing over the Internet. You're working on embedded systems, so I would expect the ringing in Figure 5 to give you cold shivers. Can you say "damped oscillator?" adjtime() and adjtimex() use this troubled algorithm, so in some sense I am arguing against my own advice above, but the Mills algorithm is still better than the step adjustment non-algorithm. If you choose to implement RADclock instead, so much the better.

Warren Young
    You raise valid concerns. The code snippet I included in my answer is from an embedded system which has nanosecond precision obtained from a GPS receiver. In my case, it only executes that code if the current system time is more than one second different from GPS time, even at startup. This has proven to be very reliable in practice. – wallyk Jun 22 '12 at 23:22
  • ntpdate and ntpd both use adjtimex to adjust the time; this is a reasonable method to make the change gradually rather than in a single step. – DaVid Nov 06 '12 at 15:39

The settimeofday() system call accepts microsecond precision. You'll have to write a short program to use it, but that is quite straightforward.

struct timeval tv;
tv.tv_sec = (some time_t value);
tv.tv_usec = (the number of microseconds after the second);
int rc = settimeofday(&tv, NULL);
if (rc != 0)
        errormessage("error %d setting system time", errno);
wallyk
  • You can use the settimeofday(2) system call; the interface supports microsecond resolution.

    #include <sys/time.h>
    
    int gettimeofday(struct timeval *tv, struct timezone *tz);
    int settimeofday(const struct timeval *tv, const struct timezone *tz);
    
       struct timeval {
           time_t      tv_sec;     /* seconds */
           suseconds_t tv_usec;    /* microseconds */
       };
    
  • You can use the clock_settime(2) system call; the interface provides multiple clocks and the interface supports nanosecond resolution.

    #include <time.h>
    
    int clock_getres(clockid_t clk_id, struct timespec *res);
    
    int clock_gettime(clockid_t clk_id, struct timespec *tp);
    
    int clock_settime(clockid_t clk_id, const struct timespec *tp);
    
       struct timespec {
           time_t   tv_sec;        /* seconds */
           long     tv_nsec;       /* nanoseconds */
       };
    
    
    CLOCK_REALTIME
          System-wide real-time clock.  Setting this clock
          requires appropriate privileges.
    
    CLOCK_MONOTONIC
          Clock that cannot be set and represents monotonic time
          since some unspecified starting point.
    
    CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
          Similar to CLOCK_MONOTONIC, but provides access to a
          raw hardware-based time that is not subject to NTP
          adjustments.
    
    CLOCK_PROCESS_CPUTIME_ID
          High-resolution per-process timer from the CPU.
    
    CLOCK_THREAD_CPUTIME_ID
          Thread-specific CPU-time clock.
    

    This interface provides the nicety of the clock_getres(2) call, which can tell you exactly what the resolution is -- just because the interface accepts nanoseconds doesn't mean it can actually support nanosecond resolution. (I've got a fuzzy memory that 20 ns is about the limit of many systems, but no references to support this.)

sarnold

If you're running an IP-capable networking protocol over the serial link (something like, ooh, PPP for example), you can just run an ntpd on the "master" host, then sync time using ntpd or ntpdate on the embedded device. NTP will take care of you.

ghoti
  • +1 for providing the only answer that doesn't require programming. The OP didn't mention a language, and only mentioned command-line tools. Perhaps this question should be on Superuser.com? – Graham Jun 26 '12 at 01:59