I'm sharing many memory blocks between two processes on Linux: one process receives data from the network, and the other consumes the data in those blocks (buffers).
Some buffers are small (hundreds of bytes) and some are very large (2–4 GB), but they all hold the same structure (an array of one struct) and are processed by the same algorithm.
I can NOT allocate every buffer at its maximal size up front, because that would far exceed the system's total memory. Instead I have to periodically check each buffer and re-allocate it as needed.
The problem is that whenever the server process enlarges (re-allocates) a buffer under the same name, the client process has to re-mmap that block.
Regarding performance: is there some kind of "lazy allocation" I can use? That is, a buffer that occupies very little physical memory at first but always sits at the same virtual address, so that as I keep writing data it gradually consumes more and more physical memory, while the client never needs to re-mmap the buffer and can always access the data at a fixed virtual address in its own address space.
If such a mechanism exists, I could avoid a lot of IPC/locking/synchronization work. And if it exists, would I need to configure a very large swap space?