
Background: I am working on a search engine and developing a new feature that creates a new thread pool (32 threads). Queries that match certain rules are executed by the new thread pool. At the same time, the old thread pool (also 32 threads) may still be working and executing other queries.

Issue: We are using Jemalloc for memory allocation. When we enable the new feature and the new thread pool begins to work, the memory consumed by Jemalloc increases from 80 GB to 95 GB over 4~5 hours, then comes back down over the next 2 hours. I have looked at Jemalloc's statistics: the increase comes entirely from "stats.mapped" (memory mapped by Jemalloc), while "stats.active" and "stats.allocated" (memory consumed by our service) remain unchanged, which suggests the increase comes from memory fragmentation.

Here are the definitions of "stats.mapped", "stats.active", and "stats.allocated" (a sketch of how we read these counters follows the list):

  1. JeMalloc Mapped Bytes : Total number of bytes in active chunks (default is 4MB per chunk) mapped by the allocator (Jemalloc). This is a multiple of the chunk size, and is larger than JeMalloc Active Bytes. This does not include inactive chunks, even those that contain unused dirty pages, which means that there is no strict ordering between this and stats.resident.
  2. JeMalloc Active Bytes: Total number of bytes in active pages (default is 4 KB per page) allocated by the application (our service). This is a multiple of the page size, and greater than or equal to stats.allocated. This does not include stats.arenas..pdirty, nor pages entirely devoted to allocator metadata.
  3. JeMalloc Allocated Bytes: Total number of bytes allocated by the application (Our service).
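
For reference, here is a minimal sketch of how these counters can be read via je_mallctl (assuming a build with the je_ function prefix, as our use of je_mallctl implies; note that the statistics are cached and are only refreshed by writing to the "epoch" mallctl):

    #include <stdint.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Read a size_t-valued statistic by name, e.g. "stats.mapped". */
    static size_t read_stat(const char *name) {
        size_t value = 0;
        size_t len = sizeof(value);
        if (je_mallctl(name, &value, &len, NULL, 0) != 0)
            fprintf(stderr, "je_mallctl(\"%s\") failed\n", name);
        return value;
    }

    int main(void) {
        /* Statistics are cached; writing any value to "epoch" refreshes them. */
        uint64_t epoch = 1;
        size_t len = sizeof(epoch);
        je_mallctl("epoch", &epoch, &len, &epoch, sizeof(epoch));

        printf("stats.allocated = %zu\n", read_stat("stats.allocated"));
        printf("stats.active    = %zu\n", read_stat("stats.active"));
        printf("stats.mapped    = %zu\n", read_stat("stats.mapped"));
        return 0;
    }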

Since the memory increase is so large (from 80 GB to 95 GB), we want to mitigate the memory impact of enabling our feature. Do you have any suggestions about the above issue (memory fragmentation in Jemalloc)? Thanks!

I have tried disabling tcache, but memory still increases when the new feature is enabled.

[Graph of memory usage over time]


1 Answer


You may have some arenas that are not being heavily utilized, such that fragmentation is very high for some arenas, and acceptable for others. Try reducing the number of arenas via the narenas option, such that no arenas are left idle during steady state operation.
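
For example (a sketch, not part of the original answer; the value 8 is only illustrative): in jemalloc 3.x the opt.* options are read-only at runtime, so narenas must be set before the allocator initializes, either via the MALLOC_CONF environment variable (JE_MALLOC_CONF for a je_-prefixed build) or via the malloc_conf symbol compiled into the application:

    /* Consulted by jemalloc once, at startup; with a je_ prefix build
       the symbol is named je_malloc_conf. */
    const char *je_malloc_conf = "narenas:8";

The same mechanism accepts other startup options, e.g. "narenas:8,lg_chunk:21" to also shrink the chunk size from the default 4 MB to 2 MB.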

Jason Evans
  • Thanks Jason. I will try the narenas option. BTW, we are using version 3.3.1; can we set option values such as "opt.lg_chunk" and "opt.narenas" using the je_mallctl function in our code? I have tried to set "opt.lg_chunk" using je_mallctl, but it does not seem to take effect when I read and print the value again... – Shuai Feb 23 '17 at 09:58
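
(A note on the comment above, not part of the original thread: in jemalloc 3.x the opt.* mallctls, including "opt.lg_chunk" and "opt.narenas", are read-only; they report the values fixed when the allocator initialized, which is why writing them via je_mallctl has no effect. They can only be set before startup, e.g. via malloc_conf as sketched above, and then verified at runtime:)

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Read back the options fixed at startup; opt.* entries are
       read-only size_t values in jemalloc 3.x, so je_mallctl can
       only query them, never change them. */
    void print_opts(void) {
        size_t narenas = 0, lg_chunk = 0;
        size_t len = sizeof(narenas);
        je_mallctl("opt.narenas", &narenas, &len, NULL, 0);
        len = sizeof(lg_chunk);
        je_mallctl("opt.lg_chunk", &lg_chunk, &len, NULL, 0);
        printf("opt.narenas = %zu, opt.lg_chunk = %zu\n", narenas, lg_chunk);
    }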