I am running Ubuntu 15.04 with Linux kernel 3.19.0-26-generic; my filesystem is the default ext4.

I run a test where I (a minimal C sketch follows the list):

  1. Open/Write/Close a file.
  2. Get the modification time (from stat, in nanoseconds).
  3. Sleep for 1 millisecond.
  4. Open/Write/Close the file again with different content.
  5. Get the new modification time (from stat, in nanoseconds).
  6. Compare the times from steps 2 and 5.

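Here is a minimal C sketch of the test (the file name `testfile` is arbitrary, and error checking is omitted for brevity):

```c
/* Minimal sketch of the test, assuming a POSIX system.
 * The file name "testfile" is arbitrary and error checking
 * is omitted for brevity. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <time.h>

/* Open/write/close the file with the given content. */
static void write_file(const char *path, const char *content)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, content, strlen(content));
    close(fd);
}

int main(void)
{
    struct stat st1, st2;
    struct timespec ms = { 0, 1000000 };   /* 1 millisecond */

    write_file("testfile", "first");       /* step 1 */
    stat("testfile", &st1);                /* step 2 */
    nanosleep(&ms, NULL);                  /* step 3 */
    write_file("testfile", "second");      /* step 4 */
    stat("testfile", &st2);                /* step 5 */

    /* step 6: compare the two modification times */
    printf("mtime 1: %ld.%09ld\n",
           (long)st1.st_mtim.tv_sec, st1.st_mtim.tv_nsec);
    printf("mtime 2: %ld.%09ld\n",
           (long)st2.st_mtim.tv_sec, st2.st_mtim.tv_nsec);
    printf("%s\n",
           (st1.st_mtim.tv_sec == st2.st_mtim.tv_sec &&
            st1.st_mtim.tv_nsec == st2.st_mtim.tv_nsec)
               ? "identical" : "different");
    return 0;
}
```

Running this in a loop reproduces the behaviour described below.
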
What I observe in about 50% of cases is that the two times are identical. I expected them always to differ (the time from step 5 greater than the time from step 2).

Is it the case that the resolution of the modification times on ext4 is coarser than a nanosecond (even though stat can report nanosecond accuracy), or is there a bug in my test? If ext4 stores timestamps at lower-than-nanosecond accuracy, then what is the real resolution of the modification times? And where is this documented?

Andrew Tomazos
  • I'm not sure about ext4, but it may well be that ext4 uses a ns precision higher than the resolution of the actual system time, to be on the safe side for the future. – too honest for this site Sep 02 '15 at 16:50
  • [http://stackoverflow.com/questions/14392975/timestamp-accuracy-on-ext4-sub-millsecond](http://stackoverflow.com/questions/14392975/timestamp-accuracy-on-ext4-sub-millsecond) – 4566976 Sep 02 '15 at 16:59

0 Answers