
I'm continuing development of my program using IxSet, and I'm curious whether I'm doing something wrong (or whether it can be optimized). Currently it consumes far more memory than I believe it should.

The program is here: https://bitbucket.org/k_bx/duplicates and the profiling result is here: https://gist.github.com/4602235

P.S.: Could someone please add an "ixset" tag? I don't have the reputation to create one.

UPDATE:

Memory profiling with -h: http://img-fotki.yandex.ru/get/6442/72443267.2/0_9d04d_4be1cd9f_orig

UPDATE 2:

A nicer memory profiling view for the same -h file: http://heap.ezyang.com/view/c1781ec5e53b00d30a9f8cd02f0b8a5e777674c9#form

Konstantine Rybnikov

1 Answer

You're just using vanilla heap profiling, which doesn't necessarily capture data structure usage. As you point out, it breaks down the heap by functions from your code. There are several options you can pass to the profiler to get what you want (from the GHC guide: http://www.haskell.org/ghc/docs/latest/html/users_guide/prof-heap.html#rts-options-heap-prof):

-hc (can be shortened to -h). Breaks down the graph by the cost-centre stack which produced the data.

-hm Breaks down the live heap by the module containing the code which produced the data.

-hd Breaks down the graph by closure description. For actual data, the description is just the constructor name; for other closures it is a compiler-generated string identifying the closure.

-hy Breaks down the graph by type. For closures which have function type or unknown/polymorphic type, the string will represent an approximation to the actual type.

-hr Breaks down the graph by retainer set.

-hb Breaks down the graph by biography. Biographical profiling is described in more detail in the GHC guide.

-hm, -hd, and -hr will probably be of most use to you. You can also, with some thought, get some information about strictness properties by using -hb.
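As a rough sketch, here is how you might compile with profiling enabled and produce a heap graph broken down by closure description. The executable name "duplicates" and the source layout are assumptions based on your repo; adjust them (or the equivalent cabal flags) to your actual build setup.

```shell
# Compile with profiling support; -fprof-auto adds cost centres
# automatically, and -rtsopts lets us pass RTS flags at run time.
ghc -O2 -prof -fprof-auto -rtsopts Main.hs -o duplicates

# Run with -hd to break the heap down by closure description
# (constructor names for data). This writes duplicates.hp.
./duplicates +RTS -hd -RTS

# Render the heap profile to PostScript (-c for colour output).
hp2ps -c duplicates.hp
```

You can then rerun with -hr or -hy in place of -hd to see retainer sets or types instead; the .hp file is also what the ezyang heap viewer you linked accepts.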

sclv