I'm having trouble understanding the offset variable provided to the applier block when reading with dispatch_io_read. The documentation says the offset is the logical offset from the base of the data object, and the source for dispatch_data_apply confirms this: the offset starts at 0 for the first region of each data chunk and then grows by the sum of the preceding region lengths.
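For example, here's a minimal sketch (plain C with blocks, built as a macOS command-line tool) that shows the behavior I'm describing; it assumes dispatch_data_create_concat keeps the two leaf regions distinct, which matches what I see:

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *a = "hello";
    const char *b = "world";
    // The default destructor copies the bytes, so the queue argument is unused here.
    dispatch_data_t first = dispatch_data_create(a, strlen(a), NULL,
                                                 DISPATCH_DATA_DESTRUCTOR_DEFAULT);
    dispatch_data_t second = dispatch_data_create(b, strlen(b), NULL,
                                                  DISPATCH_DATA_DESTRUCTOR_DEFAULT);
    dispatch_data_t both = dispatch_data_create_concat(first, second);

    dispatch_data_apply(both, ^bool(dispatch_data_t region, size_t offset,
                                    const void *buffer, size_t size) {
        // In my testing this prints "offset=0 size=5", then "offset=5 size=5":
        // offset restarts at 0 for each data object and accumulates region sizes.
        printf("offset=%zu size=%zu\n", offset, size);
        return true;
    });

    dispatch_release(both);
    dispatch_release(second);
    dispatch_release(first);
    return 0;
}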
I guess I don't understand the purpose of this variable, then. I had originally assumed it was the offset into the entire read, but it's not. It seems you have to track the bytes read yourself and advance your destination pointer by that amount to assemble a read correctly in libdispatch:
// Outside the dispatch_io_read call. The handler advances this pointer,
// so it needs __block storage to be mutable inside the blocks.
__block char *currBufferPosition = destinationBuffer;

// The handler below may run several times as chunks of the read arrive.
dispatch_io_read(channel, fileOffset, bytesRequested, queue,
                 ^(bool done, dispatch_data_t data, int error) {
    // Note: real code would check error and done here.
    dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset,
                                    const void *buffer, size_t size) {
        memcpy(currBufferPosition, buffer, size);
        currBufferPosition += size;
        return true;
    });
});
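As far as I can tell, the only thing offset buys me is addressing within a single data object. If so, an equivalent applier could write relative to a per-chunk base instead of advancing a pointer (chunkBase here is hypothetical, pointing at where the current chunk's bytes should land):

dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset,
                                const void *buffer, size_t size) {
    // offset is relative to the start of this data object, not the whole read.
    memcpy(chunkBase + offset, buffer, size);
    return true;
});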
My question is: is this the right way to consume the data handed to dispatch_data_apply? And if so, what is the purpose of the offset variable passed to the applier block? The documentation does not seem clear about this to me.