1

Reading the release notes for Red Hat Enterprise Linux 7.1, I came across this:

Process Stack Size Increased from 8KB to 16KB
Since Red Hat Enterprise Linux 7.1, the kernel process stack size has been increased from 8KB to 16KB to help large processes that use stack space.

I know the kernel process stack is resident memory, that it is allocated when a process is created, and that the memory needs to be physically contiguous. On x86_64 with a page size of 4096 bytes, the kernel will now need to find 4 contiguous pages instead of 2 for each process stack.

Can this feature be a problem when kernel memory is fragmented? With the larger kernel stack size, will it be easier to run into process-creation failures once memory becomes fragmented?
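The arithmetic behind the question can be sketched as follows (a minimal illustration; the sizes are just the x86_64 defaults mentioned above, and `pages_needed`/`allocation_order` are hypothetical helper names):

```python
# Minimal sketch: how many contiguous pages (and which allocation "order")
# a kernel stack needs, assuming the x86_64 page size of 4096 bytes.
PAGE_SIZE = 4096

def pages_needed(stack_size, page_size=PAGE_SIZE):
    """Number of physically contiguous pages for a stack of stack_size bytes."""
    return (stack_size + page_size - 1) // page_size

def allocation_order(stack_size, page_size=PAGE_SIZE):
    """Smallest order n such that a block of 2**n pages fits the stack."""
    n = 0
    while (1 << n) < pages_needed(stack_size, page_size):
        n += 1
    return n

print(pages_needed(8 * 1024), allocation_order(8 * 1024))    # 2 pages, order 1
print(pages_needed(16 * 1024), allocation_order(16 * 1024))  # 4 pages, order 2
```

So the change moves each stack from an order-1 to an order-2 physically contiguous allocation.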

c4f4t0r
  • 1,563
  • 15
  • 24
  • Your question seems to be unclear and not really programming related. – Zan Lynx May 05 '15 at 22:07
  • It is about the Linux kernel process stack. – c4f4t0r May 05 '15 at 22:09
  • Alright. I will answer your question. Then you will see why your question has a problem. – Zan Lynx May 05 '15 at 22:10
  • Thanks, I will add something to my questions. – c4f4t0r May 05 '15 at 22:12
  • I don't see how this is programming related. It's also not a problem. The pages that make up the kernel stack don't need to be contiguous in physical RAM. They need to be contiguous in the virtual address space, but the 64-bit kernel virtual address space is huge, 128 TB, so fragmentation is never an issue. – Ross Ridge May 05 '15 at 22:27
  • @RossRidge The kernel process stack uses the 1:1 logical address mapping, so I am not sure that fragmentation is not an issue. – c4f4t0r May 06 '15 at 14:54

2 Answers

1

The kernel often needs to allocate a set of one or more pages which are physically contiguous. This may be necessary while allocating buffers (required by drivers for data transfers such as DMA) or when creating a process stack.

Typically, to meet such requirements, the kernel tries to counter fragmentation: the buddy allocator hands out physically contiguous pages, and freed pages are merged back into larger physically contiguous groups whenever possible. This is handled by the memory management subsystem. Your stack (8K, or 16K in RHEL 7) is created when the program starts executing.
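The merging behaviour described above can be sketched with a toy buddy allocator (an illustrative model only, not the kernel's implementation; the page-frame numbers and the fixed maximum order are made up):

```python
# Toy buddy allocator: a freed block merges with its "buddy" (the block whose
# page-frame number differs only in bit `order`) into one block of order+1.
# Illustrative model only; not the kernel's actual implementation.

def free_block(free_lists, pfn, order, max_order=10):
    """Return block (pfn, order) to its free list, merging buddies upward."""
    while order < max_order:
        buddy = pfn ^ (1 << order)           # buddy differs in bit `order`
        if buddy in free_lists[order]:
            free_lists[order].remove(buddy)  # coalesce with the buddy...
            pfn = min(pfn, buddy)            # ...keeping the lower address
            order += 1                       # ...at the next order up
        else:
            break
    free_lists[order].add(pfn)

free_lists = {o: set() for o in range(11)}
free_block(free_lists, 0, 0)   # free page 0
free_block(free_lists, 1, 0)   # free page 1 -> merges into order-1 block at 0
free_block(free_lists, 2, 1)   # free an order-1 block at page 2 -> order 2
print(free_lists[2])           # {0}: one 4-page block, enough for a 16K stack
```

Fragmentation is exactly the situation where free pages exist but cannot coalesce this way, so no high-order block is available.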

If the kernel is unable to allocate the requested set of physically contiguous pages (2 for an 8K stack or 4 for a 16K stack, assuming a 4K page size), this can lead to page allocation failures reported as "order:2" (i.e. 2^2 = 4 pages of 4K each). The order depends on the size of the physically contiguous block requested. Observing the /proc/buddyinfo file at the time the page allocation failures occur can show signs of physical memory being fragmented.
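For example, the per-order counts in /proc/buddyinfo can be checked for free blocks at or above a given order (a minimal sketch; the sample line is made up, and the real file has one line per memory zone):

```python
# Minimal sketch: parse one /proc/buddyinfo-style line and count free blocks
# of order >= 2 (what a 16K stack needs with 4K pages). The sample line is
# made up; a real /proc/buddyinfo has one such line per memory zone.
sample = "Node 0, zone   Normal   4  3  2  0  0  0  0  0  0  0  0"

def free_blocks_at_or_above(line, order):
    counts = [int(n) for n in line.split()[4:]]  # columns after "zone <name>"
    return sum(counts[order:])

print(free_blocks_at_or_above(sample, 2))  # 2: an order-2 allocation can succeed
```

If every count from the needed order upward is zero, an allocation of that order fails even though lower-order free pages remain: that is the fragmentation scenario the question asks about.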

askb
  • 6,501
  • 30
  • 43
0

Yes.

When memory is fragmented, finding stack space can be a problem.

Zan Lynx
  • 53,022
  • 10
  • 79
  • 131