I have an application (a game) that runs on the JVM.
The game's update logic (which runs 60 times per second) finishes using about 25% of its "time-slice" (1/60 s), then sleeps off the remaining 75%. But once the garbage collector gets to run, that figure jumps to 75-200% and stays there for the rest of the execution.
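For reference, the loop is structured roughly like this (a minimal sketch; `update()` and the exact timing code are simplified placeholders, not my real code):

```java
// Minimal sketch of the fixed-timestep loop described above.
// update() is a placeholder for the real game logic.
public class GameLoop {
    static final long FRAME_NANOS = 1_000_000_000L / 60; // the 1/60 s budget

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            long start = System.nanoTime();
            update(); // normally finishes in ~25% of the frame budget
            long remaining = FRAME_NANOS - (System.nanoTime() - start);
            if (remaining > 0) {
                // Sleep off the unused part of the time-slice.
                Thread.sleep(remaining / 1_000_000L, (int) (remaining % 1_000_000L));
            }
        }
    }

    static void update() { /* game logic placeholder */ }
}
```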
The game uses about 70 MB of heap and grows by about 1-2 MB/s. When the GC runs, usage drops back to 70 MB, so there are no true memory leaks. I will try to lower the allocation rate in the future, but it shouldn't be a problem for this question.
I'm using JVM 8 with no runtime arguments or flags, so I'm not sure which GC that gives me.
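(As far as I know, HotSpot 8 with no flags defaults to the parallel throughput collector. I could verify which collectors are actually active with the standard management API; something like this sketch:)

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the names of the collectors the JVM actually picked,
// e.g. "PS Scavenge" / "PS MarkSweep" for the default parallel collector on JDK 8.
public class ShowGc {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```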
I've tried setting the heap to different sizes, but it does not affect this phenomenon.
I have two theories as to why this may be happening:
The GC unintentionally fragments my heap in a way that causes cache thrashing in the update loop. I've got logic that benefits greatly from data proximity, since it loops through the data and updates it in place (there's a simplified sketch of this loop below). Could it be that the collector shuffles some of that data into the old generation while keeping the rest in the young generation (the nursery)?
The sudden GC activity triggers something in my OS scheduler, making it realize that my main update thread doesn't need as much CPU as it has been getting, and lowering its priority. (However, the phenomenon persists even if I skip the Thread.sleep() that sleeps off the unused CPU time.)
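To make the first theory concrete, the locality-sensitive part of my update logic looks roughly like this (Particle, World, and all the fields are made-up stand-ins for my actual data):

```java
// Simplified sketch of the locality-sensitive logic from theory 1.
// Particle and its fields are placeholders for the real game data.
class Particle {
    float x, y, vx, vy;
}

public class World {
    private final Particle[] particles = new Particle[100_000];

    public World() {
        // Allocated back-to-back, so the objects initially tend to sit
        // close together on the heap, which the update loop benefits from.
        for (int i = 0; i < particles.length; i++) {
            particles[i] = new Particle();
        }
    }

    void update() {
        // Streams linearly over the data; fast while the objects stay adjacent.
        // My worry is that a copying/compacting GC relocates survivors, so the
        // array's iteration order no longer matches the heap layout.
        for (Particle p : particles) {
            p.x += p.vx;
            p.y += p.vy;
        }
    }

    public static void main(String[] args) {
        World w = new World();
        w.update(); // one tick of the locality-dependent loop
    }
}
```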
What do you think? Are my theories plausible? Can anything be done about them, or do I need to switch to a language like C? My knowledge of GCs is limited.
P.S. As a side note: generally, update() finishes at around 75% after a GC. It's when using VSync that I get numbers like 200%.