I've been testing Hypertable recently, attracted by its advertised ability to handle huge amounts of time-series data. My version is 0.9.7.5, and I inserted about one million "rows", each with about 1000 columns under a date_time:"date" qualifier and corresponding values such as '100.0' (this is not yet huge).
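For context, the layout is essentially a single column family with a timestamp qualifier per column. In HQL it looks roughly like this (the table and row key names are made up for illustration):

    CREATE TABLE metrics (date_time);

    INSERT INTO metrics VALUES
      ('series_000001', 'date_time:2013-06-01 00:00:00', '100.0'),
      ('series_000001', 'date_time:2013-06-01 00:01:00', '100.5');

repeated for about one million row keys with roughly 1000 such columns each.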
However, at every Hypertable startup, despite every memory-tuning option I know of in the hypertable.cfg file, and even after accounting for heap fragmentation, the Hypertable process uses all my RAM and starts to swap.
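For what it's worth, the memory-related settings I have been experimenting with in hypertable.cfg look roughly like the following; I am not certain these are the right knobs (or even the right property names for 0.9.7.5), which is partly why I am asking:

    # excerpt from hypertable.cfg -- values picked for a machine with limited RAM
    Hypertable.RangeServer.MemoryLimit.Percentage=40
    Hypertable.RangeServer.QueryCache.MaxMemory=50000000
    Hypertable.RangeServer.BlockCache.MaxMemory=100000000

Even with conservative values like these, the behaviour described above does not change.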
Does anyone have an idea why this is happening, and how to stop this very annoying behaviour?
Hypertable pleases me a lot: it is user-friendly while being very powerful and fast. But if this problem has no solution, I might be tempted to switch to Cassandra, whose memory usage is easier to control, since I cannot keep throwing RAM at this eternally.
Thanks a lot,
Nicolas