
I have a large boost::ptr_vector whose content I need to read from disk at runtime.

Currently I'm using a boost::serialization binary archive to write and read the ptr_vector. Would reading from disk at runtime be faster if I used a memory-mapped file with boost::interprocess?

If so, does anyone have a mini example that shows how to do this?

Frank
  • There is a strong semantic difference between the two: `ptr_vector` owns its elements (one by one), whereas in the case of `mmap` you would have a single large chunk of memory and several pointers within that area. Also, note that in the case of `mmap` you would only be able to store POD (Plain Old Data) and pointers would have to be replaced by offsets from some origin. TL;DR: `mmap` is probably faster, but the data-structures are different and probably less intuitive. – Matthieu M. Oct 22 '13 at 07:47
  • @MatthieuM. Thanks, makes sense. If my data structure were just a `vector` of data chunks, where each data chunk may have a different size -- could I then use a memory map? Or would all data chunks in the `vector` have to be of the same size? – Frank Oct 22 '13 at 16:28
  • You could not be using a `vector` (the vector would not own the memory). If you use a `Chunk*` (an array of chunks), then they all have to have the same size; if you use a `vector`, then each can have a different size (in practice), though I would remind you that in theory `sizeof(Chunk)` is fixed and only C allows tail-padding (that is, a flexible array member ` X[];` as the last member). – Matthieu M. Oct 22 '13 at 18:26
  • Thanks. I just don't understand how I would write and read a `vector` as memory mapped file. I'll open a new question for this. – Frank Oct 23 '13 at 01:20
  • It's here: http://stackoverflow.com/questions/19531243/how-to-read-write-vectorchunk-as-memory-mapped-files – Frank Oct 23 '13 at 01:33

0 Answers