
I’m reading Google’s TCMalloc source code (the Windows port).

#include <windows.h>    // SYSTEM_INFO, GetSystemInfo
#include <algorithm>    // std::max

int getpagesize()
{
    static int pagesize = 0;
    if (pagesize == 0)
    {
        // Query the system once and cache the result.
        SYSTEM_INFO system_info;
        GetSystemInfo(&system_info);
        pagesize = std::max(system_info.dwPageSize,
                            system_info.dwAllocationGranularity);
    }
    return pagesize;
}

As you can see in the code snippet above, pagesize (that is, the unit of allocation) is computed as the maximum of dwPageSize and dwAllocationGranularity. What I want to know is the relationship between these two values: is it necessary to compute the value this way, and are there any situations in which dwPageSize could be greater than dwAllocationGranularity?

fitzbutz
  • An unrelated note - use jemalloc, it outperforms tcmalloc in every way. – rustyx Sep 12 '16 at 20:53
  • Having an allocation granularity smaller than a page wouldn't be very sensible as far as I can see, but as far as I can tell it hasn't been officially ruled out. Presumably Google are just being cautious here. – Harry Johnston Sep 12 '16 at 21:38
  • The programmer that wrote this does not understand what "page size" means. There is no relationship, other than that the granularity must always be an integer multiple of the page size and can never be smaller. Granularity is a simple counter-measure against address space fragmentation. It has been 64KB forever. It is *not* a guarantee that all pages in the allocation have the same protection attributes, see [this post](http://stackoverflow.com/a/19466079/17034). – Hans Passant Sep 13 '16 at 00:16
  • @HansPassant I don't know who wrote that, but this being code from the windows port it's IMO very likely that this name was chosen to reflect the function one can find on Linux and BSD systems. That doesn't make it right ofc, refactoring to a common appropriate name (get_suitable_allocation_size?) would've been better. – Daniel Jour Sep 13 '16 at 05:20

1 Answer

Disclaimer: This answer is not based on any documentation but only on my interpretation of these constants.

I assume that the page size is correctly reported. I assume the allocation granularity refers to the granularity of the OS memory allocation interface.

There are two cases to consider:

  • The allocation granularity is greater than the page size. Allocating a block of only one page would then still consume a whole allocation-granularity unit, so such requests should be avoided.

  • The allocation granularity is less than the page size. Allocating a block of the allocation-granularity size would still cause a whole page to be allocated and mapped, so that should be avoided as well.

In both cases the OS would allocate more resources than requested. Using the maximum of the two values avoids this, so the (user space) allocation code can be (relatively) certain about its actual memory usage.
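To make this concrete, here is a minimal sketch (my own, not taken from tcmalloc) of an allocator front end built on VirtualAlloc that uses max(dwPageSize, dwAllocationGranularity) as its accounting unit; AllocationUnit and RoundUpToUnit are hypothetical helpers introduced only for illustration:

    #include <windows.h>
    #include <algorithm>
    #include <cstdio>

    // Hypothetical helper: the smallest unit the OS will actually hand out.
    static size_t AllocationUnit() {
        SYSTEM_INFO info;
        GetSystemInfo(&info);
        return std::max(info.dwPageSize, info.dwAllocationGranularity);
    }

    // Hypothetical helper: round a request up to a multiple of that unit, so
    // the size we account for equals the size the OS actually reserves.
    static size_t RoundUpToUnit(size_t bytes) {
        const size_t unit = AllocationUnit();
        return (bytes + unit - 1) / unit * unit;
    }

    int main() {
        const size_t requested = 1000;  // far below one page
        const size_t accounted = RoundUpToUnit(requested);
        // VirtualAlloc hands out granularity-aligned regions made of whole
        // pages, so 'accounted' matches what this request really costs.
        void* block = VirtualAlloc(nullptr, accounted,
                                   MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        std::printf("requested %zu bytes, accounted %zu bytes, block at %p\n",
                    requested, accounted, block);
        if (block) VirtualFree(block, 0, MEM_RELEASE);
        return 0;
    }

Whichever of the two values is larger is the smallest request size that wastes neither a partially used granularity slot nor a partially used page, which is exactly what the getpagesize() above computes.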

Daniel Jour
  • Your reasoning is pretty correct. The documentation (https://msdn.microsoft.com/en-us/library/windows/desktop/ms724958(v=vs.85).aspx) for the **SYSTEM_INFO** data structure says that **dwPageSize** is the page size and the granularity of page protection and commitment; this is the page size used by the **VirtualAlloc** function. **dwAllocationGranularity**, in turn, is the granularity for the starting address at which virtual memory can be allocated (see the sketch below). My doubt is whether these two values are rigidly system-defined or more flexible and customizable. – fitzbutz Sep 12 '16 at 20:16
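A minimal sketch (my own, based on my reading of that documentation page, not tcmalloc code) that makes the two roles visible: MEM_RESERVE works on dwAllocationGranularity-aligned starting addresses, while committing and protecting pages inside the reservation works at dwPageSize granularity.

    #include <windows.h>
    #include <cstdio>

    int main() {
        SYSTEM_INFO info;
        GetSystemInfo(&info);
        std::printf("dwPageSize = %lu, dwAllocationGranularity = %lu\n",
                    info.dwPageSize, info.dwAllocationGranularity);

        // Reserve one granularity unit; the base address returned by
        // VirtualAlloc is aligned to dwAllocationGranularity.
        void* base = VirtualAlloc(nullptr, info.dwAllocationGranularity,
                                  MEM_RESERVE, PAGE_NOACCESS);
        if (base == nullptr) return 1;

        // Commit just the first page: commitment and page protection operate
        // at dwPageSize granularity inside the reserved region.
        void* page = VirtualAlloc(base, info.dwPageSize,
                                  MEM_COMMIT, PAGE_READWRITE);
        std::printf("reserved at %p, committed first page at %p\n", base, page);

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }

The sketch only shows how each value constrains VirtualAlloc; it does not answer whether the values can be changed, since both are simply whatever GetSystemInfo reports for the running system.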