I have the following code that creates huge pages on Linux:
int iTotalByte = sizeof(datafeed) * ARRAYSIZE;
/* With MAP_HUGETLB the length should be a multiple of the huge page size. */
conf = (datafeed *) mmap(NULL, iTotalByte, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE | MAP_HUGETLB,
                         -1, 0);
if (conf == MAP_FAILED)
{
    perror("mmap");
    exit(1);
}
and it works well; numastat -m
shows how many MB of huge pages the application has allocated.
The following is the code where I create shared memory used for IPC:
int shm_fd;
/* shm_open returns a non-negative fd on success, so test >= 0 (fd 0 is valid). */
if ((shm_fd = shm_open(THE_FILE, O_CREAT | O_EXCL | O_RDWR,
                       S_IREAD | S_IWRITE)) >= 0) {
    ; /* We are the first instance */
}
else if ((shm_fd = shm_open(THE_FILE, O_CREAT | O_RDWR,
                            S_IREAD | S_IWRITE)) < 0)
{
    printf("Could not create shm object. %s\n", strerror(errno));
    exit(1);
}
int iTotalByte = sizeof(datafeed) * ARRAYSIZE;
if (ftruncate(shm_fd, iTotalByte) < 0)  /* size the object before mapping */
{
    perror("ftruncate");
    exit(1);
}
conf = (datafeed *) mmap(NULL, iTotalByte, PROT_READ | PROT_WRITE,
                         MAP_SHARED, shm_fd, 0);
if (conf == MAP_FAILED)
{
    perror("mmap");
    exit(1);
}
This code creates a shared memory object named THE_FILE
under /dev/shm/, and many processes can do IPC
through that shared memory.
I wonder whether there is a way to mmap shared memory into
/dev/shm/
that is backed by huge pages at the same time. In other words, I would like
this huge-page memory to be usable for IPC among processes,
not just by threads within the same process.
Edit :
https://lwn.net/Articles/375098/
http://lxr.free-electrons.com/source/Documentation/vm/hugetlbpage.txt?v=2.6.32
contain sample code that works as expected.
https://lwn.net/Articles/374424/
https://lwn.net/Articles/375096/
https://lwn.net/Articles/376606/
https://lwn.net/Articles/378641/
https://lwn.net/Articles/379748/
also helped a lot in understanding huge pages.