
I have a Cassandra container which generates hprof files of size 2-3 GB.

I followed this link, Preventing Cassandra from dumping hprof files, but it didn't help. The files are still created and consume a lot of space. I need to get rid of these hprof files.

Ajay Gupta
  • I think the bigger question is: why are your Cassandra nodes crashing? Fix that, and then it shouldn't generate those files in the first place. – Aaron Apr 25 '18 at 19:18

1 Answer


If it's an OOM error, comment out

# Enable heap-dump if there's an OOM
#-XX:+HeapDumpOnOutOfMemoryError

in jvm.options, or in older versions in cassandra-env.sh:

#JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
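Since the Cassandra instance here runs in a container, a minimal sketch of two ways the edited config could actually reach the process. The image tag, the /etc/cassandra config path, the host path, and the JVM_EXTRA_OPTS pass-through in cassandra-env.sh are all assumptions to verify against your image and version:

# Option 1: bind-mount an edited jvm.options over the one in the image
# (assumes the image keeps its config under /etc/cassandra).
docker run -d --name cassandra \
  -v /path/on/host/jvm.options:/etc/cassandra/jvm.options \
  cassandra:3.11

# Option 2: if your cassandra-env.sh appends JVM_EXTRA_OPTS to JVM_OPTS,
# disable the flag from the environment instead (the last -XX setting should win).
docker run -d --name cassandra \
  -e JVM_EXTRA_OPTS="-XX:-HeapDumpOnOutOfMemoryError" \
  cassandra:3.11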

It's worth investigating why your instance is OOMing; it shouldn't do that. Most likely you need a larger heap size for your workload, but there may be data modeling issues too.
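If it does turn out the heap is simply too small, a rough sketch of where heap sizing is usually adjusted; which file applies depends on your Cassandra version, and the 8G/800M values are placeholders to size for your own workload, not recommendations:

# cassandra-env.sh (older versions): override the automatic heap calculation
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"

# jvm.options (newer versions): uncomment and set min and max to the same value
-Xms8G
-Xmx8G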

If it's a JVM crash, I think your best bet is something like -XX:HeapDumpPath=/dev/null.
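A sketch of where that flag could go, in the same two files as above; whether your JVM quietly discards the dump when pointed at /dev/null is worth a quick test on your version:

# jvm.options
-XX:HeapDumpPath=/dev/null

# or cassandra-env.sh
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/dev/null"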

Chris Lohfink
  • Hi Chris, thanks for your response, but as I said, I tried exactly what you suggested, meaning this one, and the hprof file is still created: #JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError" – Ajay Gupta Apr 25 '18 at 19:49
  • Would it be good to write a script that deletes the files automatically (a rough sketch follows these comments)? I have not tried -XX:HeapDumpPath=/dev/null. – Ajay Gupta Apr 25 '18 at 19:51
  • If the file is dumped to /dev/null you won't need to delete it. I would strongly recommend finding out why it's crashing; then you won't have a dump file either. – Chris Lohfink Apr 25 '18 at 20:56
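For completeness, if a cleanup script is used as a stopgap while the root cause is investigated, a minimal sketch with find; the /var/lib/cassandra path and the one-day retention are assumptions, since dumps land wherever HeapDumpPath (or the process working directory) points in your setup:

#!/bin/sh
# Remove heap dumps older than one day (path and age are assumptions; adjust to your setup).
find /var/lib/cassandra -maxdepth 1 -name '*.hprof' -mtime +1 -delete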