In my project I use Caffeine for caching. The configuration is as follows:
cache = Caffeine.newBuilder()
.expireAfterWrite(6, TimeUnit.MINUTES)
.maximumSize(500_000)
.recordStats()
.build();
The cache occupies about 600 MB.
Usage on the read path:
ReadOnlyHashTable v = itemPropsCache.getIfPresent(key);
On a miss, the value is loaded from Redis and put back into the cache:
// load from redis ...
ReadOnlyHashTable table = new ReadOnlyHashTable(redisValue);
itemPropsCache.put(key, table);
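As a side note, Caffeine's `Cache.get(key, mappingFunction)` performs the load-on-miss atomically, so concurrent misses for the same key trigger only one Redis load instead of several. A minimal sketch of that pattern (the `loadFromRedis` helper and its return value are placeholders, not the original code, and `String` stands in for `ReadOnlyHashTable`):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class CacheLoadExample {
    // Hypothetical stand-in for the Redis lookup in the original code.
    static String loadFromRedis(String key) {
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        Cache<String, String> itemPropsCache = Caffeine.newBuilder()
                .expireAfterWrite(6, TimeUnit.MINUTES)
                .maximumSize(500_000)
                .recordStats()
                .build();

        // Load-on-miss: the mapping function runs at most once per key,
        // and the result is inserted into the cache before get() returns.
        String v = itemPropsCache.get("item:1", CacheLoadExample::loadFromRedis);
        System.out.println(v);
    }
}
```

This does not change the allocation behavior, but it avoids duplicate loads under concurrent misses.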
After checking the GC logs and a JVM heap dump, I found that the old generation grows steadily until a full GC occurs.
My guess is the following: because I set a 6-minute expiration, the cache produces roughly 500 MB of garbage in the old generation every six minutes (entries survive long enough in the young generation to reach MaxTenuringThreshold, get promoted to the old generation, and only then expire and become collectible).
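A back-of-envelope calculation supports this guess. Assuming writes are spread roughly evenly over time (an assumption; the post does not state the write pattern), the entire cache turns over once per TTL, so promoted-then-expired garbage accrues at a steady rate:

```java
// Rough churn estimate using the numbers from the question:
// ~600 MB of cached data, 6-minute expireAfterWrite.
public class ChurnEstimate {
    public static void main(String[] args) {
        double cacheMb = 600.0; // approximate live size of the cache
        double ttlMin = 6.0;    // expireAfterWrite in minutes
        // With uniform writes, the whole cache is replaced once per TTL,
        // so old-gen garbage accumulates at roughly cacheMb / ttlMin.
        double churnMbPerMin = cacheMb / ttlMin;
        System.out.println("old-gen churn ~ " + churnMbPerMin + " MB/min");
    }
}
```

At roughly 100 MB/min of old-generation garbage, the collector must reclaim old-gen space continuously, which matches the observed steady growth between full GCs.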
I have tried both the CMS and G1 garbage collectors, and neither reaches an acceptable state.
CMS:
-server
-Xmx6g
-Xms6g
-XX:NewRatio=1
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:MaxTenuringThreshold=15
-XX:SurvivorRatio=3
-XX:+ParallelRefProcEnabled
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSCompactAtFullCollection
-XX:+HeapDumpOnOutOfMemoryError
-XX:MetaspaceSize=512m
-XX:MaxMetaspaceSize=512m
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/export/Logs/gc.log
-XX:+PrintTenuringDistribution
G1:
-server
-Xmx6g
-Xms6g
-XX:+UseG1GC
-XX:+HeapDumpOnOutOfMemoryError
-XX:MetaspaceSize=512m
-XX:MaxMetaspaceSize=512m
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/export/Logs/gc.log
-XX:+PrintTenuringDistribution
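One observation on the G1 flag set above: since the expired entries die in the old generation, concurrent marking needs to start early enough for mixed collections to reclaim them before a full GC is forced. `-XX:InitiatingHeapOccupancyPercent` is a standard HotSpot G1 option that controls this; the value below is only a starting-point guess on my part, not something from the original configuration:

```
-XX:InitiatingHeapOccupancyPercent=35
```

Lowering it from the default of 45 makes marking (and therefore old-gen reclamation) begin sooner, at the cost of more concurrent GC work.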
Is there any solution that does not require adjusting the heap size or the cache size?
Thank you!