What I did:

  1. Enable huge pages as root (my system supports 1MB huge pages)

    $ echo 20 > /proc/sys/vm/nr_hugepages
    
  2. Mount the huge page filesystem at /mnt/hugepages

    $ mount -t hugetlbfs nodev /mnt/hugepages
    
  3. Create a file in the huge page filesystem

    $ touch /mnt/hugepages/hello
    
  4. Then map a huge page at address 0 using mmap, as shown in the code below

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define FILE_NAME "/mnt/hugepages/hello"
    #define PROTECTION (PROT_READ | PROT_WRITE) // page protection flags
    #define LENGTH (1024*1024*1024)             // mapping length (1GB)
    #define FLAGS (MAP_SHARED)                  // mapping flags
    #define ADDR ((void *)0x0UL)                // requested start address

    int main(void)
    {
        int fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
        if (fd < 0) {
            perror("Open failed");
            exit(1);
        }

        // allocate a buffer using huge pages
        void *buf = mmap(ADDR, LENGTH, PROTECTION, FLAGS, fd, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            unlink(FILE_NAME);
            exit(1);
        }

        return 0;
    }
    

The program outputs:

mmap: Cannot allocate memory
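
For reference, the state of the system huge page pool (total, free, and the configured page size) can be checked with:

    $ grep Huge /proc/meminfo
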
drdot
  • Add `MAP_HUGETLB` to flags and try – Santosh A Mar 03 '15 at 07:59
  • also can you try `ftruncate(2)` before `mmap(2)`? – holgac Mar 03 '15 at 08:01
  • @SantoshA, (MAP_SHARED | MAP_HUGETLB) gives the same error – drdot Mar 03 '15 at 08:07
  • @AntoJurković, thank you for pointing that out... I am correcting it. I wonder if it is because I don't have enough huge pages, since I enabled 20 huge pages of 1MB each and I am using mmap to map 1GB. Does that matter, or does the system automatically pick smaller pages once the huge pages are used up? – drdot Mar 03 '15 at 08:08
  • @revani, could you elaborate? Truncate to how big? – drdot Mar 03 '15 at 08:10
  • Latest update: I enabled 1024 huge pages and the 1GB mmap now works. MAP_HUGETLB does not affect whether the mmap returns an error, but I am not sure the pages are still huge pages without this flag. So if someone can explain the number-of-huge-pages problem and MAP_HUGETLB, I am happy to accept that as the answer. – drdot Mar 03 '15 at 08:16
  • @dannycrane `ftruncate(2)` is a poorly named function that *changes* (not only decreases) the size of the file. You should truncate the file to at least the size you are going to `mmap(2)`, which in your case is `LENGTH`. I guess you already did that, since you mentioned it is working now. – holgac Mar 03 '15 at 14:25

3 Answers

Linux only supports huge pages for private anonymous mappings (not backed by a file), i.e. you can only get huge pages for the stack, data and heap.
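
As an illustration of that route (a minimal sketch added here, not part of the original answer): an anonymous mapping can request huge pages directly via MAP_HUGETLB, with no hugetlbfs file involved:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024; // assumes a 2MB huge page size
        // private anonymous mapping backed by the huge page pool
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED)
            perror("mmap");
        return 0;
    }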


Nowadays there is hugeadm to configure the system huge page pools, with no need to fiddle with /proc and mount, and hugectl to run a program with huge pages for its code and data.
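
For example (illustrative invocations; both tools ship with libhugetlbfs, and the pool size and program name here are made up):

    $ hugeadm --pool-list              # show the configured huge page pools
    $ hugeadm --pool-pages-min 2MB:512 # reserve 512 huge pages of 2MB each
    $ hugectl --heap ./myapp           # run ./myapp with its heap on huge pages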

Maxim Egorushkin

It is not clear whether the OP is talking about a 1GB page size or is on ARMv7 and really has 1MB pages (the subject does not match the description). This answer is about using 1GB page sizes.

Anyway, if you want 1GB page sizes you must enable them at boot time (unless your memory is exceptionally clean, since huge pages can only be allocated from hugepagesz-sized contiguous free memory). To enable gigabyte huge pages, add hugepagesz=1GB hugepages=n to GRUB_CMDLINE_LINUX, where n is the number of 1GB pages you want.

You can now use 1GB huge pages through interfaces like get_huge_pages() (yay!), but you still can't allocate them using shmget/mmap (boo!). Neither of those has a mechanism to specify the huge page size, so the work-around is to also set default_hugepagesz=1GB on your kernel boot command line.
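
A minimal sketch of the get_huge_pages() route (assuming libhugetlbfs is installed; link with -lhugetlbfs):

    #include <stdio.h>
    #include <hugetlbfs.h>

    int main(void)
    {
        size_t len = 1UL << 30; // 1GB; must be a multiple of the huge page size
        // allocate len bytes backed by huge pages from the default pool
        void *buf = get_huge_pages(len, GHP_DEFAULT);
        if (buf == NULL) {
            perror("get_huge_pages");
            return 1;
        }
        free_huge_pages(buf);
        return 0;
    }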

Once you have set all three parameters, say goodbye to TLB misses and bask in the glory that is 1GB page sizes!... Unless you are on POWER, in which case you should be basking in the glory that is 16GB page sizes ;).

  # Script to create a /hugepages mount point and enable 1GB hugepages
  # For RHEL (6) systems!
  #
  #   MAKE SURE YOU KNOW WHAT THIS SCRIPT DOES BEFORE RUNNING!

  # Mount a hugetlbfs with 1GB pages on /hugepages at every boot
  echo "hugetlbfs   /hugepages    hugetlbfs rw,mode=0777,pagesize=1G  0 0" \
  >> /etc/fstab
  mkdir /hugepages

  # Swap "rhgb quiet" on the kernel command line for the hugepage parameters,
  # keeping a backup of the original GRUB defaults
  sed 's/rhgb quiet/hugepagesz=1GB default_hugepagesz=1GB hugepages=16 selinux=0/' /etc/default/grub > grub
  cp /etc/default/grub grub.old
  mv -f grub /etc/default/grub
  grub2-mkconfig > /etc/grub2-efi.cfg

  # Now reboot
Clarus

Note that you will also need to use ftruncate(2) to set the size of the file so that it actually covers the memory you use. The mmap(2) call will still succeed for a zero-sized file, but you will get a SIGBUS when you try to access the memory:

Use of a mapped region can result in these signals:

...

SIGBUS Attempted access to a portion of the buffer that does not correspond to the file (for example, beyond the end of the file, including the case where another process has truncated the file).

(From mmap(2).)
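
Here is a minimal sketch of the fix (hedged: it reuses the file and the 1GB length from the question, and assumes the huge page pool is big enough):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        int fd = open("/mnt/hugepages/hello", O_CREAT | O_RDWR, 0755);
        if (fd < 0) {
            perror("open");
            exit(1);
        }

        size_t len = 1UL << 30; // 1GB, matching LENGTH in the question

        // size the file first so the mapping is fully backed by it
        if (ftruncate(fd, len) < 0) {
            perror("ftruncate");
            exit(1);
        }

        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }

        ((char *)buf)[0] = 1; // would raise SIGBUS without the ftruncate above
        return 0;
    }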

To check that the area is really using huge pages, you can inspect /proc/[pid]/smaps (documented in proc(5) on Linux) and see whether VmFlags for the memory area contains ht.
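
For example (hypothetical PID; the -A 20 is just a generous window so the mapping's VmFlags line is included):

    $ grep -A 20 /mnt/hugepages/hello /proc/1234/smaps | grep VmFlags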

Edit:

Have you looked into libhugetlbfs by the way?

Ulfalizer