
I ran the program with root privilege, but it keeps complaining that mmap cannot allocate memory. Code snippet is below:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define PROTECTION (PROT_READ | PROT_WRITE)
#define LENGTH (4*1024)

#ifndef MAP_HUGETLB
#define MAP_HUGETLB 0x40000
#endif

#define ADDR (void *) (0x0UL)
#define FLAGS (MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)

int main (int argc, char *argv[]){
...
  // allocate a buffer with the same size as the LLC using huge pages
  buf = mmap(ADDR, LENGTH, PROTECTION, FLAGS, 0, 0);
  if (buf == MAP_FAILED) {
    perror("mmap");
    exit(1);
  }
...}

Hardware: I have 8 GB of RAM. The processor is Ivy Bridge.

Uname output:

Linux mymachine 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

EDIT 1: The output of perror:

mmap: Cannot allocate memory

I also added a line to print errno:

printf("something is wrong: %d\n", errno);

But the output is:

something is wrong: 12

EDIT 2: The hugetlb-related information from /proc/meminfo:

HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

3 Answers


Well, as Documentation/vm/hugetlbpage.txt suggests, running

echo 20 > /proc/sys/vm/nr_hugepages

solved the problem. Tested on Ubuntu 14.04. See also Why I can't map memory with mmap.
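To see whether the reservation actually took effect, the hugepage counters in /proc/meminfo can be inspected afterwards (a sketch; the echo needs root, so it is shown commented out here):

```shell
# Reserve 20 huge pages (run as root), as in the answer above:
#   echo 20 > /proc/sys/vm/nr_hugepages

# Then confirm the pool grew: HugePages_Total should now be 20.
grep -i '^huge' /proc/meminfo
```

If HugePages_Total stays at 0 after the echo, the kernel could not find enough contiguous physical memory to back the pool.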


When you use MAP_HUGETLB, the mmap(2) call can fail (e.g. if your system does not have huge pages configured, or if some resource is exhausted), so you should almost always retry without MAP_HUGETLB as a fallback. Also, you should not define MAP_HUGETLB yourself. It is provided by system headers internal to <sys/mman.h>, and its value may differ across architectures and kernel versions; if the headers do not define it, don't use it!

// allocate a buffer with the same size as the LLC using huge pages
buf = mmap(NULL, LENGTH, PROTECTION,
#ifdef MAP_HUGETLB
           MAP_HUGETLB |
#endif
           MAP_PRIVATE | MAP_ANONYMOUS,
           -1, 0);
#ifdef MAP_HUGETLB
if (buf == MAP_FAILED) {
  // try again without huge pages:
  buf = mmap(NULL, LENGTH, PROTECTION,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}
#endif
if (buf == MAP_FAILED) {
  perror("mmap");
  exit(EXIT_FAILURE);
}

The kernel's Documentation/vm/hugetlbpage.txt mentions that huge pages are (or may be) limited: the pool has to be set up, e.g. by passing hugepages=N to the kernel or by writing through /proc/ or /sys/, and hugetlb support may not be configured into the kernel at all. So you are never guaranteed to get them. Also, using huge pages for a small mapping of only 4 Kbytes is a mistake (or at least a likely failure). Huge pages are worthwhile only when asking for many megabytes (e.g. a gigabyte or more), and they are always an optimization: you would want your application to still run on a kernel without them.

  • I want to ask mmap to use huge tlb. How should I make this error go away? – drdot Dec 24 '14 at 16:07
  • Why `mmap` with `MAP_HUGETLB` can fail? please give some link about that, both [mmap](http://man7.org/linux/man-pages/man2/mmap.2.html) and [hugetlbpage](https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt) did not talk about `mmap` with `MAP_HUGETLB` can fail – D3Hunter Dec 24 '14 at 16:10
  • @jujj, and Basile Starynkevitch, I changed the mapping size to 4*1024*1024 and applied the echo command provided by jujj. It is working now. But since both of you provided part of the solution, I don't know whose answer to accept. I think Basile posted the kernel document and explanation first. Note: with only the echo command, I still get seg faults later in my program. – drdot Dec 24 '14 at 17:45
  • But I still did not understand why I cannot define MAP_HUGETLB – drdot Dec 24 '14 at 18:01
  • Because it is provided by system headers. If they don't define it, your system doesn't have it. – Basile Starynkevitch Dec 24 '14 at 18:26
  • By using the echo command. The mmap error goes away. How should I interpret that (mmap is using my define)? – drdot Dec 24 '14 at 18:53
  • Your `echo` is changing the configuration of the running kernel – Basile Starynkevitch Dec 24 '14 at 18:53
  • @BasileStarynkevitch, yes I understand that. So what is the drawback of defining my own MAP_HUGETLB and changing it at runtime? – drdot Dec 24 '14 at 22:16
  • @dannycrane. seg faults may not be caused by `mmap`, use `gdb` to find out whether you are dereferencing some invalid pointer or write out of array bound (`gdb` may not help you with this one). – D3Hunter Dec 25 '14 at 01:35
  • @jujj, I will take your answer since it solves the problem. I created another thread about the seg fault when changing page sizes. Hope you guys can comment on that. http://stackoverflow.com/questions/27707319/segmentation-fault-due-to-intialize-function-when-changing-the-size-of-the-hug – drdot Dec 30 '14 at 14:37

A practical solution, if you are sure physical memory is sufficient:

echo 1 > /proc/sys/vm/overcommit_memory
  • This won't fix the problem. If the kernel has no huge pages available (they need to be consecutive in **physical** memory), allocating with `MAP_HUGETLB` will fail, even if you still have a terabyte of free memory. – cmaster - reinstate monica Jul 19 '18 at 07:16