One Python process writes status updates to a file for other processes to read. In some circumstances, the status updates happen repeatedly and quickly in a loop. The easiest and fastest approach is to call open().write() in a single line:
open(statusfile, 'w').write(status)
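For comparison, here is a minimal sketch of the same write wrapped in a with block, reusing the statusfile and status names from above; it closes the file (and so flushes Python's internal buffer to the OS) as soon as the block exits, rather than whenever the file object happens to be garbage-collected:

with open(statusfile, 'w') as f:   # closing the file flushes Python's buffer to the OS
    f.write(status)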
An alternative approach takes a few more lines and forces the data all the way to disk, which carries a significant performance penalty:
import os

f = open(statusfile, 'w')
f.write(status)
f.flush()              # push Python's internal buffer to the OS
os.fsync(f.fileno())   # force the OS to write the data to physical storage
f.close()
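On the reading side, the other processes simply reopen the file whenever they want the latest value; a minimal sketch, assuming the same statusfile name:

with open(statusfile) as f:
    status = f.read()   # reads the current contents of the file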
I'm not trying to protect against an OS crash. So, does the one-line approach push the data into the OS buffer, so that other processes read the newest status data when they open the file? Or do I need to use os.fsync()?