
The JVM arguments are as follows: -Xms20g -Xmx20g -XX:MaxGCPauseMillis=10000 -XX:G1ReservePercent=30 -Duser.timezone=UTC

The only entries in the logs are:

  • Pause Young (Normal) (G1 Evacuation Pause)
  • Pause Remark
  • Pause Young (Prepare Mixed) (G1 Evacuation Pause)
  • Pause Young (Mixed) (G1 Evacuation Pause)
  • Pause Young (Concurrent Start)

....

But there is nothing like a Full GC. Not even once.

Memory usage is above 65 percent.

At what levels of memory consumption can we expect a Full GC?

  • The developers spent a lot of effort on avoiding Full GC as much as possible. Why aren’t you happy with their success? What problem do you want to solve? – Holger Sep 30 '21 at 07:28
  • The gap was in my understanding that Full GC is/should be avoided as much as possible. I was trying to understand GC logs but couldn't see a single Full GC reference, and that made me curious. Thanks – Vijay Kumar Chauhan Sep 30 '21 at 13:36
  • Hey @VijayKumarChauhan, so what conclusion did you come to on this? We are facing the same issue where our memory consumption does not come down after reaching 80%. But when we do an inspectHeap, a full GC is triggered and memory is cleaned up – Amol Kshirsagar Oct 09 '22 at 21:00

1 Answer


A Full GC will be triggered when your objects become long-lived (i.e. are promoted from the young generation to the old generation), and even then only if memory pressure makes it necessary.

So what you would need to do is store a lot of objects in a long-lived structure like a HashMap (a huge one), let it sit for a long time, and then attempt to allocate large objects.
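
A minimal sketch of that idea (not from the original post; the class name, entry count, array sizes, and the sleep are my own assumptions, sized for a heap around the 20 GB mentioned in the question):

    import java.util.HashMap;
    import java.util.Map;

    public class OldGenPressure {
        // Strong static reference so the map survives every young collection
        // and its entries are eventually promoted to the old generation.
        private static final Map<Integer, byte[]> longLived = new HashMap<>();

        public static void main(String[] args) throws InterruptedException {
            // Phase 1: fill the map until it occupies a large share of the heap
            // (roughly 10 GB here at ~10 KB per entry).
            for (int i = 0; i < 1_000_000; i++) {
                longLived.put(i, new byte[10 * 1024]);
            }
            // Let it sit for a while, as the answer suggests; promotion actually
            // happens during the young collections triggered by the loop above.
            Thread.sleep(60_000);
            // Phase 2: allocate large temporary arrays; with the old generation
            // already mostly full, G1 may eventually have no option but a Full GC.
            for (int i = 0; i < 10_000; i++) {
                byte[] big = new byte[50 * 1024 * 1024]; // ~50 MB temporary
                big[0] = (byte) i; // use the array so the allocation is not dead code
            }
        }
    }

Running something like this with -Xlog:gc* (JDK 9 and later) should make any resulting "Pause Full" entry easy to spot in the log.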

Also, most GC algorithms try to avoid a Full GC as much as possible, so using an older collector like Concurrent Mark Sweep (CMS) may make it easier to trigger. Getting it to happen with the G1 collector is possible in theory, but it won't be easy.

For the G1 collector, allocating many objects that survive collections and then trying to allocate very large ("humongous") objects may trigger it. But it isn't going to be easy.
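
As another hedged sketch (again, the class name, region size, and array size are my own assumptions, not the answerer's): G1 treats any allocation of at least half a heap region as humongous and places it directly into old-generation regions, so pinning a stream of such objects fills the old generation and, right before the OutOfMemoryError, typically forces a last-ditch Full GC:

    import java.util.ArrayList;
    import java.util.List;

    // Run with something like:
    //   java -Xms20g -Xmx20g -XX:G1HeapRegionSize=8m -Xlog:gc* HumongousPressure
    public class HumongousPressure {
        public static void main(String[] args) {
            List<byte[]> pinned = new ArrayList<>();
            try {
                while (true) {
                    // 8 MB is at least half of an 8 MB region, so each array is a
                    // humongous allocation that goes straight into old-gen regions.
                    pinned.add(new byte[8 * 1024 * 1024]);
                }
            } catch (OutOfMemoryError e) {
                // G1 typically attempts a last-ditch Full GC before throwing this,
                // so a "Pause Full" entry should appear in the GC log by now.
                pinned.clear();
            }
        }
    }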

I've seen them in the wild with a huge heap. It's probably easier to reproduce with a 16+ GB heap than with a small one (I see you are using 20 GB, which is good).

AminM
  • When we say "long lived", what timings are we talking about? – Vijay Kumar Chauhan Sep 29 '21 at 16:23
  • It has to survive several collections. The young generation will classically use a copying collector, while the old generation (if you have ConcurrentMarkAndSweep set) will use the mark-and-sweep algorithm. So to simulate this I'd leave it running for a while, while it adds/removes things continuously; you'd want some of the data to survive collections and make it to the old part of memory. – AminM Sep 29 '21 at 16:27
  • ConcurrentMarkAndSweep?? We have G1 here. Does the concept of mark and sweep still hold true in G1 GC? – Vijay Kumar Chauhan Sep 29 '21 at 16:29
  • Since you're doing this with the G1 collector, I think you may be able to force this by allocating huge objects. But good luck; G1 was built to avoid this very scenario. – AminM Sep 29 '21 at 16:30
  • I didn't notice you're using G1. With G1 this is going to be very hard. – AminM Sep 29 '21 at 16:31