
Suppose I have a (single-threaded) program that (exclusively) opens a large file and does two non-overlapping writes:

#include <fcntl.h>
#include <unistd.h>

int fd = open(path, O_RDWR);
pwrite(fd, data1, size1, offset1);
pwrite(fd, data2, size2, offset2);
close(fd);

Are there any guarantees (from POSIX, Linux, or common filesystems like ext4) that, in the case of a power failure, no part of data2 will end up in permanent storage unless all of data1 also ends up in permanent storage?

Or, to put it another way, is there a guarantee that the file (in permanent storage) won't end up in a state where the second write has started but the first hasn't completed?

Or do I have to fsync(fd)/fdatasync(fd) between the writes to achieve this?
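
That is, would something like the following be needed (a sketch reusing the variables and includes from the snippet above, with fdatasync acting as a durability barrier; error handling omitted for brevity, as before)?

int fd = open(path, O_RDWR);
pwrite(fd, data1, size1, offset1);
fdatasync(fd);   /* flush data1 to the device; on ext4 with default barriers this
                    also flushes the drive's volatile cache before returning */
pwrite(fd, data2, size2, offset2);
fdatasync(fd);   /* make data2 durable as well before relying on it */
close(fd);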

yuri kilochek
  • Most disks have cache so even fsync is not a guarantee. – stark Apr 01 '22 at 11:04
  • @stark sure, but the cache could be "sequential" in the sense that older writes get written to permanent storage earlier. Unless there is some mechanism to ensure something of this sort, I don't see how databases can possibly guarantee data durability. – yuri kilochek Apr 01 '22 at 11:53
  • https://serverfault.com/q/460864/115396 – stark Apr 01 '22 at 12:41

0 Answers