
I have the following huge-page creation source code on Linux:

int iTotalByte = sizeof(datafeed) * ARRAYSIZE;
conf = (datafeed *) mmap(0, iTotalByte, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE | MAP_HUGETLB,
                         -1, 0);
if (conf == MAP_FAILED)
{
    printf("mmap error: %s\n", strerror(errno));
    exit(1);
}

It works well; numastat -m shows how many MB of huge pages the application has allocated.

The following is the source I use to create shared memory for IPC:

int shm_fd;
if ((shm_fd = shm_open(THE_FILE, O_CREAT | O_EXCL | O_RDWR,
                       S_IREAD | S_IWRITE)) >= 0) {   /* 0 is a valid fd, so test >= 0 */
    ; /* We are the first instance */
}
else if ((shm_fd = shm_open(THE_FILE, O_CREAT | O_RDWR,
                            S_IREAD | S_IWRITE)) < 0)
{
    printf("Could not create shm object. %s\n", strerror(errno));
    exit(1);
}
int iTotalByte = sizeof(datafeed) * ARRAYSIZE;
if (ftruncate(shm_fd, iTotalByte) < 0)
{
    printf("ftruncate error: %s\n", strerror(errno));
    exit(1);
}
conf = (datafeed *) mmap(0, iTotalByte, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
if (conf == MAP_FAILED)
{
    printf("mmap error: %s\n", strerror(errno));
    exit(1);
}

This creates a shared-memory object THE_FILE under /dev/shm/, and many processes can do IPC through that shared memory.

I wonder whether there is a way to mmap shared memory under /dev/shm/ that is huge-page backed at the same time. That is, I would like this huge-page memory to be used for IPC among processes, not just among threads of the same process.

Edit :

https://lwn.net/Articles/375098/

http://lxr.free-electrons.com/source/Documentation/vm/hugetlbpage.txt?v=2.6.32

contain sample code that works as expected.

https://lwn.net/Articles/374424/

https://lwn.net/Articles/375096/

https://lwn.net/Articles/376606/

https://lwn.net/Articles/378641/

https://lwn.net/Articles/379748/

also helped a lot in understanding huge pages.

chicks
barfatchen

1 Answer


Assuming you've done the right thing setting up huge pages, e.g.:

sysctl vm.nr_hugepages=1024

checking it with:

grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:    1024
HugePages_Free:      986
HugePages_Rsvd:      261
HugePages_Surp:        0
Hugepagesize:       2048 kB

Put something like this in your /etc/fstab:

hugetlbfs   /mnt/hugepages  hugetlbfs   gid=2000,uid=2000   0   0

Then use files under /mnt/hugepages in place of /dev/shm and you're doing IPC through shared memory backed by 2 MB huge pages.

chicks
Hal

• The source code for shm_open() and shm_unlink() hardcodes the use of /dev/shm (see https://github.com/lattera/glibc/blob/master/sysdeps/posix/shm_open.c ). So it's not possible to use shm_open()/shm_unlink() with /dev/hugepages. Instead you have to use standard open()/unlink() calls to create a huge-page-backed in-memory file that resides in /dev/hugepages. Depending on the permissions with which the OS mounts /dev/hugepages, you may need to mount your own hugetlbfs filesystem with restricted permissions. – frankster Jul 01 '22 at 08:48