
I am currently doing performance analysis on a server and I see that an application is generating a large number of page faults/sec. After checking page reads/sec, it seems these are all soft page faults, not hard page faults (there is no disk activity).

I then read online that most modern CPUs can handle a large number of soft page faults, but nowhere can I find what a large number would be ... this app is spiking between 3,000 and over 7,000 page faults per second.

So, for this number of soft page faults per second do I need to worry?
Is there a noticeable performance hit for this level of faults?
Can I do anything to optimize it?

Thanks in advance

Adam Fox

1 Answer


Based on the facts you have provided, I think the application in question is doing rapid memory allocation (e.g. malloc()). It seems that it allocates a block, possibly releases it, and then allocates again. Usually a memory allocator will keep freed blocks cached inside the process, but in your case it looks as if that memory is being returned to the kernel instead, so each new allocation has to be backed by fresh pages, and touching those pages is what generates the soft page faults.
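
To make that concrete, here is a minimal sketch of that kind of allocation pattern, assuming Linux with glibc's default malloc settings; the 1 MiB block size and the iteration count are illustrative only. Each block is larger than glibc's default mmap threshold, so free() returns it to the kernel immediately and every touch on the next iteration is a fresh soft (minor) fault:

    /* Illustrative only: repeatedly allocate, touch, and free a large block.
     * With glibc defaults, 1 MiB allocations are served by mmap() and
     * munmap()ed on free(), so every memset() below faults in new pages. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);

        for (int i = 0; i < 1000; i++) {
            char *buf = malloc(1024 * 1024);
            if (buf == NULL)
                return 1;
            memset(buf, 0, 1024 * 1024);   /* touch every page */
            free(buf);
        }

        getrusage(RUSAGE_SELF, &after);
        printf("minor (soft) page faults: %ld\n",
               after.ru_minflt - before.ru_minflt);
        return 0;
    }

Running something like this alongside your monitoring should reproduce a burst of soft page faults with no corresponding page reads/sec, which matches what you are seeing.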

I don't think there is much you can do here, since we are dealing with application behaviour, not the kernel or any other layer. However, the situation could be improved by using a different memory allocator. Try searching for "memory allocator". For example, on Linux the default memory allocator is ptmalloc; one alternative is tcmalloc: http://goog-perftools.sourceforge.net/doc/tcmalloc.html
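
On Linux, tcmalloc can usually be tried without rebuilding by preloading its shared library into the process. If switching allocators is not an option and the application uses glibc's malloc, a related, glibc-specific tweak is to raise the allocator's thresholds so that freed memory stays cached in the process rather than being handed back to the kernel. A minimal sketch follows; the function name and the threshold values are assumptions for illustration, not recommendations:

    /* Hypothetical glibc-only tuning: keep freed memory cached in-process.
     * Values are illustrative; measure before and after changing them. */
    #include <malloc.h>

    void tune_glibc_allocator(void)
    {
        /* Serve allocations up to 8 MiB from the heap rather than mmap(),
         * so free() does not munmap() them straight back to the kernel. */
        mallopt(M_MMAP_THRESHOLD, 8 * 1024 * 1024);

        /* Disable trimming of the heap top back to the kernel. */
        mallopt(M_TRIM_THRESHOLD, -1);
    }

Whether this helps depends entirely on the application's allocation pattern, so treat it as something to test, not as a fix.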

Michael Hampton
user66421