I have a rather naive question about the speed of reading from a file in Python. I am implementing an application that needs to read from a binary file. The data is organised in fixed-size blocks (events, each occupying the same space and encoding the same type of information), and for a given event it may not be necessary to read all of its fields. I have opened the file with memory mapping, which gives me a file "pointer" I can use to seek() and read():
import mmap

self.f = open("myFile", "rb")
# Map the whole file read-only; the mmap object keeps its own position,
# which seek() and read() operate on.
self._mmf = mmap.mmap(self.f.fileno(), length=0, access=mmap.ACCESS_READ)
Since I know which bytes correspond to which piece of information in an event, I was thinking of implementing a function that receives the type of information requested, seek()s to that position, read()s only the relevant bytes, and then repositions the file pointer at the beginning of the event for possible subsequent calls (which may or may not happen).
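To make this concrete, here is a minimal sketch of what I have in mind; the event size, field names, and (offset, length) table are hypothetical placeholders, not my actual format:

import mmap

EVENT_SIZE = 64  # hypothetical: fixed size of one event in bytes
FIELD_OFFSETS = {"timestamp": (0, 8), "energy": (8, 4)}  # hypothetical (offset, length) per field

class EventReader:
    def __init__(self, path):
        self.f = open(path, "rb")
        self._mmf = mmap.mmap(self.f.fileno(), length=0, access=mmap.ACCESS_READ)

    def read_field(self, name):
        # Remember where the current event starts, jump to the requested
        # field, read only its bytes, then restore the position so later
        # calls still see the same event.
        start = self._mmf.tell()
        offset, length = FIELD_OFFSETS[name]
        self._mmf.seek(start + offset)
        data = self._mmf.read(length)
        self._mmf.seek(start)
        return data

    def next_event(self):
        # Advance the pointer to the beginning of the following event.
        self._mmf.seek(self._mmf.tell() + EVENT_SIZE)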
My question is: should this implementation necessarily be expected to be slower than reading the entire event in one go (and possibly using the information only partially), or does it depend on the event size relative to how many seek() calls I would make in my workflow?
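If it helps, this is roughly how I would measure the difference between the two approaches; the field offset and length are again made up for illustration:

import time

def time_reads(mmf, n_events, whole_event=True):
    # Compare reading each full event in one go against seeking to a
    # single field per event; EVENT_SIZE is the fixed event size from
    # the sketch above.
    mmf.seek(0)
    t0 = time.perf_counter()
    for i in range(n_events):
        if whole_event:
            mmf.read(EVENT_SIZE)          # read the whole event
        else:
            mmf.seek(i * EVENT_SIZE + 8)  # jump to one field (offset 8 is made up)
            mmf.read(4)                   # read only that field's 4 bytes
    return time.perf_counter() - t0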
Thanks!