
When getting a Network Time Protocol packet (NTP version 4, see RFC 5905):

from contextlib import closing
from socket import socket, AF_INET, SOCK_DGRAM
import struct, time

start = time.time()
with closing(socket(AF_INET, SOCK_DGRAM)) as s:        # in Python 3, sockets are also context managers themselves
    s.sendto(b'\x23' + 47 * b'\0', ('pool.ntp.org', 123))     # 0x23 = LI 0, VN 4, Mode 3 (client), see RFC 5905
    msg, address = s.recvfrom(1024)
now = time.time()

I usually get a round-trip time (now - start) of around 40 milliseconds.

However, with

format = "!4bhHhH9I"    # seconds fields are signed, fraction fields must be unsigned (H, not h)
unpacked = struct.unpack(format, msg[0:struct.calcsize(format)])
li_vn_mode, stratum, poll, precision = unpacked[0:4]
print('root_delay', unpacked[4] + unpacked[5] / 2**16)         # https://www.rfc-editor.org/rfc/rfc5905#page-13
print('root_dispersion', unpacked[6] + unpacked[7] / 2**16)
print('ref_id', unpacked[8])
print('ref_timestamp  %.3f' % (unpacked[9] + unpacked[10] / 2**32 - 2208988800))
print('orig_timestamp %.3f' % (unpacked[11] + unpacked[12] / 2**32))
print('recv_timestamp %.3f' % (unpacked[13] + unpacked[14] / 2**32 - 2208988800))
print('tx_timestamp   %.3f' % (unpacked[15] + unpacked[16] / 2**32 - 2208988800))

I get a root_delay of 0.00056 seconds, which seems unlikely to be true! (I don't think I have a 0.5 ms ping to the time server; that round-trip time is really too small.)
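For scale, 0.00056 s is representable in the 16.16 short format: it corresponds to a seconds field of 0 and a fraction field of about 37. A quick check with a hypothetical raw value (the field values here are illustrative, not from an actual packet):

```python
# Hypothetical wire value: seconds field = 0, fraction field = 37
raw_seconds, raw_fraction = 0, 37
root_delay = raw_seconds + raw_fraction / 2**16
print(round(root_delay, 5))  # 0.00056
```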

Question: how exactly is root_delay measured in the NTP protocol?

Note:

  • RFC 5905 states:

      Root Delay (rootdelay): Total round-trip delay to the reference
      clock, in NTP short format.
    
  • Each time I rerun the script, root_delay seems to decrease (even though my local computer's RTC isn't updated by my Python script... which is strange)

  • My parsing of root_delay seems to be correct, see https://www.rfc-editor.org/rfc/rfc5905#page-19:

      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |LI | VN  |Mode |    Stratum     |     Poll      |  Precision   |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                         Root Delay                            |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                         Root Dispersion                       |
      ...
    

    and https://www.rfc-editor.org/rfc/rfc5905#page-13:

      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |          Seconds              |           Fraction            |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    
                           NTP Short Format
    
  • I don't use ntplib, which seems to parse root_delay differently (but not in accordance with https://www.rfc-editor.org/rfc/rfc5905#page-13?)
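For reference, the NTP short format quoted above (16-bit seconds, 16-bit fraction, both unsigned) can be decoded with a small helper; this is a sketch, and the function name is mine:

```python
import struct

def decode_short(raw4):
    """Decode a 4-byte NTP short format value (RFC 5905, page 13):
    a 16-bit seconds field followed by a 16-bit fraction field."""
    seconds, fraction = struct.unpack('!HH', raw4)
    return seconds + fraction / 2**16

# Example: seconds = 1, fraction = 0x8000 (= 1/2), i.e. 1.5 s
print(decode_short(b'\x00\x01\x80\x00'))  # 1.5
```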

Basj

1 Answer


You are simply querying a remote server as a client; the root_delay and root_dispersion it sends are the server's own values, relative to its Stratum-0 reference clock.

You have to do your own math if you want to figure out your own root_delay & root_dispersion.

You can use the timestamp data you have to calculate the round trip for the packet, add the value the server sent as its root_delay, and then you have YOUR root_delay.
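In RFC 5905 terms (section 8, on-wire protocol): with T1 = client transmit time, T2 = server receive time (recv_timestamp), T3 = server transmit time (tx_timestamp) and T4 = client receive time, the client's round-trip delay is delta = (T4 - T1) - (T3 - T2), and adding the server's advertised root_delay gives the client's own root delay. A minimal sketch, with hypothetical timestamps (the function name is mine):

```python
def client_root_delay(t1, t2, t3, t4, server_root_delay):
    """Round-trip delay to the server per RFC 5905 section 8,
    plus the server's advertised root_delay, gives this client's
    total root delay to the reference clock."""
    delta = (t4 - t1) - (t3 - t2)        # network round trip, minus server processing time
    return server_root_delay + delta

# Hypothetical numbers: 40 ms network round trip, 1 ms server
# processing time, 0.56 ms advertised server root_delay.
print(client_root_delay(0.0, 0.0205, 0.0215, 0.0410, 0.00056))
```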

Jason