
I'm trying to run some code I didn't write that needs a large (~1GB) chunk of contiguous memory. I'm running the same Linux binary on two different hardware configurations. It runs on one system but fails with "Cannot allocate memory" on the other.

uint64_t alloc_flags = MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS | MAP_HUGETLB |
                       (30 << MAP_HUGE_SHIFT);

mem->buffer = (char *)mmap(NULL, mem->size, PROT_READ | PROT_WRITE,
                           alloc_flags, mem->fd, 0);
if (mem->buffer == MAP_FAILED) {
    perror("[ERROR] - mmap() failed with");
    exit(1);
}

Any ideas of what might be the problem, or what to look at?

/proc/meminfo looks about the same on both systems.

I tried without success: echo 20 > /proc/sys/vm/nr_hugepages

EDIT: on both systems, /sys/kernel/mm/hugepages/ contains "hugepages-1048576kB hugepages-2048kB".

However, /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages reveals 1 on the successful system and 0 on the failing system!

  • You didn't post the actual number of free huge pages - `/sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages` –  May 28 '20 at 22:06
  • Also try to get the actual failure reason by running your program using `strace` or through other means (you need `errno` value). –  May 28 '20 at 22:12
  • more information added. perror prints the errno "Cannot allocate memory" which is 12 – kw1 May 30 '20 at 00:08

2 Answers


Having plenty of memory is not enough. The hardware requires each huge page to be a contiguous range of physical memory, aligned to the page size.

If your system's memory is fragmented and doesn't have enough contiguous free regions, the allocation will fail.

You can ask the kernel to reserve some amount of memory at boot time through kernel parameters:

hugepagesz=1G hugepages=2

This reserves two 1GB pages. Note that your CPU needs to have the pdpe1gb flag set:

grep pdpe1gb /proc/cpuinfo

This should not be a problem, since practically all x86-64 CPUs released after Westmere (2010) support it.
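As a side note (based on the kernel's hugetlbpage documentation, not tested on the asker's machines): /proc/sys/vm/nr_hugepages only controls the *default* huge page size, which is usually 2MB — that is likely why `echo 20 > /proc/sys/vm/nr_hugepages` didn't help here. If a reboot is not an option, you can sometimes reserve 1GB pages at runtime through the per-size sysfs knob, though on a long-running system the kernel may not find enough contiguous physical memory:

```shell
# Reserve two 1GB huge pages at runtime (needs root; may fail or
# reserve fewer than requested if physical memory is fragmented):
echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# Verify how many were actually reserved and how many are free:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages
```

Reserving at boot via the kernel parameters above is more reliable, because it happens before memory has a chance to fragment.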

  • I can try this. Is there a way to set a number of 1GB pages instead of 2MB ones? As it appears the failing system simply doesn't have any free 1GB pages. – kw1 Jun 02 '20 at 21:51
  • @kw1 Check [this](https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html) - info on kernel parameters. Specifically `hugepagesz` and `hugepages`. –  Jun 02 '20 at 22:03
  • this worked! I added these to grub.cfg and now there are 2 free 1G huge pages to use. – kw1 Jun 03 '20 at 22:45

Reading mmap(2), it seems that MAP_HUGE_1GB (that is, your (30 << MAP_HUGE_SHIFT) flag) is not supported everywhere:

The range of huge page sizes that are supported by the system can be discovered by listing the subdirectories in /sys/kernel/mm/hugepages

What's your ls /sys/kernel/mm/hugepages/ output?

Federico