
I want to know whether a program that shows a high number of page faults (or the highest in the system), say in Task Manager or Process Explorer, is an indication of memory fragmentation. Is there any other way to reveal this kind of problem (memory fragmentation)? A program with a huge number of page faults while running may be accessing data that is not in RAM, so the OS frequently interrupts it to load pages from disk. Could memory fragmentation be a possible reason for that? I want to know whether these two things are related.

Stathis Andronikos

2 Answers


From Wikipedia:

The main functions of paging are performed when a program tries to access pages that are not currently mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system must:

1. Determine the location of the data in secondary storage.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to refer to the new page frame.
5. Return control to the program, transparently retrying the instruction that caused the page fault.
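To make those steps concrete, here is a tiny, purely illustrative simulation (not real OS code; all names are made up): a page table maps virtual pages to frames, and the fault handler "loads" a missing page, maps it, and lets the access be retried.

```cpp
#include <cstdio>
#include <unordered_map>

// vpage -> frame; a missing entry means "not currently mapped to physical memory".
std::unordered_map<int, int> page_table;
int next_free_frame = 0;
long page_faults = 0;

int access_page(int vpage) {
    auto it = page_table.find(vpage);
    if (it != page_table.end())
        return it->second;            // resident: no fault

    ++page_faults;                    // page fault: the "OS" takes over
    int frame = next_free_frame++;    // obtain an empty frame (no eviction in this toy)
    // ...here a real OS would locate the data on disk and copy it into the frame...
    page_table[vpage] = frame;        // update the page table
    return frame;                     // the access is then retried transparently
}

int main() {
    for (int vpage : {0, 1, 0, 2, 0, 2})
        access_page(vpage);
    std::printf("page faults: %ld\n", page_faults);   // prints 3 (pages 0, 1 and 2)
}
```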

Thus, I would say that fragmentation normally has nothing to do with page faults. The latter is an indication that RAM is full and that this specific program consumes much more memory than the others, so more of its memory lives in the swap area; each time it tries to access a page that has been swapped out by the OS, a page fault occurs and the OS has to load that page back into RAM.
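If you want to read the same counter that Task Manager / Process Explorer shows, a minimal sketch on Windows (using PSAPI's `GetProcessMemoryInfo`; link against `psapi.lib`, error handling kept minimal) could look like this. Note that the counter includes both soft and hard faults, so a large number by itself does not prove the process is hitting the disk:

```cpp
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS pmc = {};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        // Cumulative page faults (soft + hard) for this process.
        std::printf("Page faults : %lu\n", pmc.PageFaultCount);
        std::printf("Working set : %zu bytes\n", pmc.WorkingSetSize);
    }
    return 0;
}
```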

That is if you are experiencing this with a single process. If you are observing the same issue with all processes, it is an indication of thrashing. In this case the amount of physical memory is not enough to accommodate all the running processes, so the virtual memory subsystem spends much more time paging. The processes make little progress because each time a page fault occurs, the faulting process loses the CPU and has to wait until the page is back in RAM.

Fragmentation normally happens when your memory map contains many small free chunks that cannot satisfy new allocations, so the process has to ask the OS for more memory to accommodate them. The symptom in this case is higher memory usage, or memory not being released back to the OS even after the program has finished some task that was supposed to allocate dynamic memory, do some work, and then release it.
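As an illustrative sketch of that pattern (the exact behavior depends on the allocator, so treat this as a demonstration rather than a benchmark): many small allocations followed by freeing every other one leaves the heap full of small holes that a later large request cannot reuse, so the process ends up asking the OS for more memory even though plenty of bytes are nominally free.

```cpp
#include <cstdlib>
#include <vector>

int main() {
    std::vector<void*> blocks;
    for (int i = 0; i < 100000; ++i)
        blocks.push_back(std::malloc(64));      // fill the heap with small chunks

    for (std::size_t i = 0; i < blocks.size(); i += 2)
        std::free(blocks[i]);                   // free every other chunk: ~3 MB free, but only in 64-byte holes

    // None of the 64-byte holes can satisfy this request, so the allocator
    // typically has to extend the heap instead of reusing the freed space.
    void* big = std::malloc(1024 * 1024);

    std::free(big);
    for (std::size_t i = 1; i < blocks.size(); i += 2)
        std::free(blocks[i]);
    return 0;
}
```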

rkachach

A high number of page faults tends to be caused by a high demand for resident memory. Memory fragmentation could be the underlying cause of a high demand for resident memory, but it would not be my first guess.

Maybe the problem simply needs that much resident memory.

Maybe the problem needs that much virtual memory but the algorithm is poorly designed (poor locality of access) so the demand for resident memory is higher than it ought to be.

Maybe the program is poorly coded so it uses a lot more memory than it needs.

Maybe the task's demand for resident memory is perfectly reasonable (given the available physical memory) but Microsoft's brain-dead memory management algorithms are generating overwhelming page faults for no good reason.
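As a hedged illustration of the "poor locality of access" case above: both loops below read exactly the same data, but the second one strides across rows, so nearly every access touches a different 4 KiB page and the demand for resident memory per unit of work is far higher. The sizes are arbitrary (about 400 MB of ints); shrink them if you try this on a small machine.

```cpp
#include <vector>
#include <cstdio>

int main() {
    const std::size_t rows = 10000, cols = 10000;
    std::vector<int> a(rows * cols, 1);
    long long sum = 0;

    // Good locality: walk memory in address order, touching each page ~1024 times in a row.
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            sum += a[r * cols + c];

    // Poor locality: a stride of 40000 bytes means almost every access lands on a
    // different page, so the whole array must stay resident to avoid faulting.
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            sum += a[r * cols + c];

    std::printf("%lld\n", sum);
    return 0;
}
```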

Most page faults are "soft" faults, meaning no disk activity is actually needed. The OS has taken pages away from the task without removing them from physical RAM, as a way of testing which pages the task really needs, with the longer-term goal of keeping that task's "working set" from growing (with Microsoft's misuse of the term "working set"). That is all necessary and correct behavior for an OS.

But when the task needs those pages back quickly, you get a soft fault in which the OS gives those pages back and takes other pages away, rather than recognizing that the task needs more total resident RAM and that there is sufficient physical RAM to accommodate it. I have seen many cases in which the single-threaded kernel CPU time spent servicing soft faults is 90% or more of the elapsed time of a long-running program, while most of the machine's RAM just sits unused.
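One mitigation (not something this answer claims to have used, and the sizes below are only examples) is to ask Windows for a larger minimum working set with `SetProcessWorkingSetSize`, which can reduce this soft-fault churn. The call can fail if the requested minimum exceeds what the account's quota or privileges allow.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Example sizes only: request a 256 MB minimum / 512 MB maximum working set.
    SIZE_T min_ws = 256u * 1024 * 1024;
    SIZE_T max_ws = 512u * 1024 * 1024;
    if (!SetProcessWorkingSetSize(GetCurrentProcess(), min_ws, max_ws)) {
        // Fails if the request exceeds the account's quota/privileges; the
        // process then simply keeps its default working-set limits.
        std::printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
    }
    return 0;
}
```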

JSF
  • Don't blame Microsoft, the page fault algorithms are perfectly fine. They page out the pages which haven't been used recently. You can't expect them to be psychic and predict which pages will be used. Your **own** memory management is a lot more important. If you mix hot and cold data on a single page, Windows cannot page out just the cold part of a page. – MSalters Oct 09 '15 at 11:54
  • @MSalters, I'm not stupid and I'm not blindly blaming Microsoft. I'm blaming Microsoft because I've studied the problem carefully looking for workarounds. I'm blaming Microsoft because the same program source code recompiled for Linux and running with the same memory access pattern is not overwhelmed by the kernel CPU time of servicing soft faults. (Not that I'm very happy with Linux memory management either, but a lot less unhappy.) – JSF Oct 09 '15 at 11:58
  • 2
  • That's precisely why I said you can't expect a psychic OS. Linux just happened to **guess** what pages your specific program will use. But for every memory access pattern that's better on Linux, there's one that is better on Windows. Also, if you want to hard claim memory, call `VirtualLock`, or make a soft claim with `SetProcessWorkingSetSize`. Again, the algorithms aren't psychic, so you can help by providing them with information about what will happen. – MSalters Oct 09 '15 at 12:06
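A minimal sketch of the `VirtualLock` suggestion from the comment above (the buffer and its size are arbitrary; locked pages count against the process's minimum working set, so locking much more than this would require raising that minimum with `SetProcessWorkingSetSize` first):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T size = 64u * 1024;   // 64 KB of "hot" data, size chosen arbitrarily
    void* buf = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (buf && VirtualLock(buf, size)) {
        // These pages now stay resident; touching them will not page-fault.
        // ... use the buffer ...
        VirtualUnlock(buf, size);
    } else {
        std::printf("allocation or lock failed: %lu\n", GetLastError());
    }
    if (buf) VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```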