
I'm using Elastic APM and want to find out how long the garbage collector has been running during a given period of time. The goal is to understand whether the application is running out of memory; this seems more accurate than just checking heap used, since a garbage collection can be triggered when heap space is low and then free up a large amount of it.

Elastic APM tracks jvm.gc.time, which the Elastic documentation defines as:

The approximate accumulated collection elapsed time in milliseconds. Source

I assumed this meant how much time has been spent garbage collecting since the application started. My plan was to read this value periodically and determine how much of the time interval was spent garbage collecting.
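
For illustration, this is roughly the calculation I had in mind (a minimal sketch; the class name, method name, and numbers are made up, and the readings would really come from the Elastic APM metric documents):

```java
public class GcTimeFraction {

    /**
     * Given two cumulative jvm.gc.time readings (milliseconds spent in GC since
     * JVM start) and the wall-clock timestamps at which they were taken, return
     * the fraction of that interval spent garbage collecting.
     */
    static double gcFractionOfInterval(long gcTime1Ms, long wallClock1Ms,
                                       long gcTime2Ms, long wallClock2Ms) {
        return (double) (gcTime2Ms - gcTime1Ms) / (wallClock2Ms - wallClock1Ms);
    }

    public static void main(String[] args) {
        // Hypothetical numbers purely for illustration: 200 ms of GC over a
        // 3-minute (180,000 ms) window.
        double fraction = gcFractionOfInterval(2384, 0, 2584, 180_000);
        System.out.printf("GC consumed %.2f%% of the interval%n", fraction * 100);
    }
}
```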

When I read this value at two different times, it turns out the second (later) reading is actually lower than the first.

First Reading

  • Mon Mar 23 14:27:40 CDT 2020
  • jvm.gc.time = 2384

Second Reading

  • Mon Mar 23 14:30:41 CDT 2020
  • jvm.gc.time = 2292

Can anyone help me understand what jvm.gc.time captures?


1 Answer


These metrics come directly from java.lang.management.GarbageCollectorMXBean. The value of the jvm.gc.time metric is taken from GarbageCollectorMXBean.getCollectionTime, which does indeed accumulate from the time the process started.

Assuming you're looking at metrics from a single JVM, there are a couple of possible reasons why the value would appear to have gone backwards:

  1. The process restarted.
  2. The values are for two different GC "memory managers" (e.g. G1 Young Generation, G1 Old Generation).

If the process had restarted (which I expect you would know about anyway), the metrics documents in Elasticsearch would have different values for the field agent.ephemeral_id.

The more likely answer is that you're seeing values for two different memory managers/GC generations, in which case the metrics documents in Elasticsearch would have different values for the field labels.name.
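
To make the second case concrete, here is a minimal sketch (the class name is mine) that lists the JVM's GC memory managers; each one is its own GarbageCollectorMXBean with its own cumulative collection time, and they end up as separate metrics documents distinguished by labels.name:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeByCollector {
    public static void main(String[] args) {
        // Each GC "memory manager" (e.g. "G1 Young Generation", "G1 Old Generation")
        // is a separate GarbageCollectorMXBean with its own cumulative counters.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-25s count=%d time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

So when you compare two readings, compare them per memory manager (i.e. per labels.name); mixing the young-generation and old-generation series can easily make the later value look smaller than the earlier one.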

