
I'm trying to debug a program that uses too much memory. I have profiling enabled and run the program with the runtime options -s -p -h -xt. Then I use hp2ps to plot the resulting .hp file:

(heap profile plot produced by hp2ps)
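For reference, the invocation is roughly the following (myprog stands in for the real binary name; the build needs -prof and -rtsopts for these flags to be accepted):

    ./myprog +RTS -s -p -h -xt -RTS
    hp2ps -c myprog.hp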

However, the actual memory usage is somewhere around 650 MB (the program is running in a Docker container and I can see this using docker stats).

The -s option gives me this output, which is in line with the 650 MB:

 157,666,975,520 bytes allocated in the heap
     311,897,256 bytes copied during GC
     202,383,488 bytes maximum residency (609 sample(s))
      64,558,856 bytes maximum slop
             635 MB total memory in use (200 MB lost due to fragmentation)

                                 Tot time (elapsed)  Avg pause  Max pause
Gen  0     301713 colls,     0 par    1.932s   1.803s     0.0000s    0.0022s
Gen  1       609 colls,     0 par    2.743s   2.694s     0.0044s    0.0104s

TASKS: 5 (1 bound, 4 peak workers (4 total), using -N1)

SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)

INIT    time    0.001s  (  0.001s elapsed)
MUT     time   64.448s  ( 84.663s elapsed)
GC      time    4.499s  (  4.320s elapsed)
RP      time    0.000s  (  0.000s elapsed)
PROF    time    0.176s  (  0.177s elapsed)
EXIT    time    0.001s  (  0.001s elapsed)
Total   time   69.137s  ( 88.985s elapsed)

Alloc rate    2,446,431,692 bytes per MUT second

Productivity  93.2% of total user, 72.4% of total elapsed

gc_alloc_block_sync: 0
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0

Reading this post, it seems a similar problem was resolved using the options -xt -hy, but that didn't work for me (and gave me even less information in the plot). I've also tried the solutions suggested in this, this, and this question.

I suspect that a function using ByteString is responsible, because ByteString uses pinned memory.
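As a rough illustration of what I mean (a made-up sketch, not my actual code): strict ByteStrings are backed by pinned buffers, which the GC cannot move.

    import qualified Data.ByteString as BS
    import qualified Data.ByteString.Char8 as BC

    main :: IO ()
    main = do
      -- Each chunk is a strict ByteString whose buffer is allocated pinned;
      -- retaining many of them keeps many pinned blocks alive at once.
      let chunks = [ BC.pack (show i) | i <- [1 :: Int .. 1000000] ]
      print (BS.length (BS.concat chunks))

Since pinned blocks cannot be compacted by the copying collector, a pattern like this would at least be consistent with the 200 MB reported as lost to fragmentation above.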

Any ideas?

  • The profile says `202,383,488 bytes maximum residency`, which means your program did not use more than 202 MB of memory at any given instant within its lifetime (that little spike all the way at the end - seems about right to me). `635 MB total memory in use` again refers to a maximum (not a total!) - specifically, the maximum amount of memory the OS allocated to the program. The .hp graph only measures the memory which the GC is aware of - if you GC'd some memory but the OS has not reclaimed it yet, you would see the higher memory usage in e.g. `docker stats`. – user2407038 Oct 27 '16 at 14:24
  • The [ghc user guide](https://downloads.haskell.org/~ghc/6.12.1/docs/html/users_guide/prof-heap.html#mem-residency) gives several explanations as to why residency reported by OS tools will differ from that of the GHC profiler. tl;dr: don't worry about the discrepancy; this is the profiler doing its job (trying to eliminate sources of memory allocation which you do not control and cannot possibly affect in your program, i.e. the garbage collector and the runtime itself). – user2407038 Oct 27 '16 at 14:33
  • I'm not sure I understand what the problem is here. Is it that 202 ≠ 635? Those numbers measure different things, and will normally be different. For more discussion see this recent question: http://stackoverflow.com/questions/40044364/making-sense-from-ghc-profiler – Reid Barton Oct 27 '16 at 15:27
  • @ReidBarton You're right, I had mixed up the two quantities. There is also the amount of memory measured by docker stats - around 650 MB as well, coincidentally - but user2407038 had some comments on why that one is larger. – jorgen Oct 27 '16 at 16:38

0 Answers