
According to the mlock() man page:

All pages that contain a part of the specified address range are guaranteed to be resident in RAM when the call returns successfully; the pages are guaranteed to stay in RAM until later unlocked.

Does this also guarantee that the physical address of these pages is constant throughout their lifetime, or until unlocked?

If not (that is, if the memory manager can move them while still keeping them in RAM), is there anything that can be said about the new location, or about the event when such a change occurs?

UPDATE:

Can anything be said about the coherency of the locked pages in RAM? If the CPU has a cache, then does mlock-ing guarantee RAM coherency with the cache (assuming write-back cache)?

jdphenix
ysap
  • I can't find an explicit guarantee, but I am also struggling to think of a scenario where it would make sense to move an `mlock()`ed page in physical memory (while obviously keeping its virtual address). Nice question (+1) – NPE Mar 07 '13 at 15:46
  • Would you mind providing some context, i.e. what it is that your code does or needs to do where the answer to the question matters. – NPE Mar 07 '13 at 15:50
  • NPE, I am trying to figure out if it is safe to pass a pointer-to-a-locked-buffer to an accelerator hardware which has shared memory space with the host CPU. If pages can be moved, then it is not safe. – ysap Mar 07 '13 at 16:08
  • Awesome, thanks. I figured it was something along similar lines, but it's nice to have it confirmed. Out of interest, how are you converting the virtual address to the physical address? – NPE Mar 07 '13 at 16:10
  • The unique "physical address" concept need not exist in all architectures. Think NUMA. Memory attached to CPU A could very well be seen as a different physical address range from CPU B. – n. m. could be an AI Mar 07 '13 at 16:11
  • @NPE - using `mmap()` to map `/dev/mem` blocks. – ysap Mar 07 '13 at 16:15
  • On the update: `mlock()` just promises not to swap out pages, it does not touch MTRRs or anything like that. – n. m. could be an AI Mar 07 '13 at 16:19
  • @n.m., my understanding is that NUMA implies separate address regions per processor, but still a unified address space across a system. This means that you have performance penalty when accessing non-local data, but still all data is symmetrically accessible. – ysap Mar 07 '13 at 17:06
  • Unified logical addressing, yes, but not necessarily physical. – n. m. could be an AI Mar 07 '13 at 17:08
  • Btw Linux NUMA has a `migrate_pages` function. – n. m. could be an AI Mar 07 '13 at 17:10
  • Better look at the memory defrag code in Linux which clears space for 2M large page use. I don't know if it has an exception for mlocked pages or not. Also wouldn't this be best done by a specific hardware driver for this accelerator device? – Zan Lynx Mar 07 '13 at 18:07
  • @ZanLynx - hardware driver (kernel mode) is definitely the way to go, but it is not the case at hand. – ysap Mar 07 '13 at 18:08

1 Answer


No. Pages that have been mlocked are managed using the kernel's unevictable LRU list. As the name suggests (and as mlock() guarantees), these pages cannot be evicted from RAM. However, the pages can be migrated from one physical page frame to another. Here is an excerpt from Unevictable LRU Infrastructure (formatting added for clarity):

MIGRATING MLOCKED PAGES

A page that is being migrated has been isolated from the LRU lists and is held locked across unmapping of the page, updating the page's address space entry and copying the contents and state, until the page table entry has been replaced with an entry that refers to the new page. Linux supports migration of mlocked pages and other unevictable pages. This involves simply moving the PG_mlocked and PG_unevictable states from the old page to the new page.

mdittmer
  • Thanks for the excerpt. This confirms my suspicion that we cannot rely on the assumption of the page being stationary. – ysap Jan 22 '14 at 00:45