
I have a Java server running on a beefy machine (many cores, 64 GB RAM, etc.) and I submit workloads to it in a test scenario: in each test I submit exactly the same workload 10 times in a row. On one particular workload, I observe that a run in the middle of the 10 takes much longer to complete (e.g. runs 1-2: 10 sec, run 3: 12 sec, run 4: 25 sec, run 5: 10 sec, etc.).

In a YourKit profile with wall time from the server, I see no increase in IO, GC, network, or much of anything else during the slowdown, and no particular method increases its proportion of time spent; every method is just slower, roughly in proportion. What I do see is that average CPU usage decreases (presumably because the same work is spread over more time) while kernel CPU usage increases, from 0-2% on the faster runs to 9-12% on the slow one. Kernel usage crawls slowly up from the end of the previous workload (which is slightly slower), stays high, then drops in the pause between the slow workload and the next one. I cannot map this kernel CPU to any calls in YourKit.

Does anyone have an idea what this could be? Or can you suggest further avenues of investigation that might show where the kernel time goes?
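For reference, one avenue would be to sample per-thread kernel time from inside the JVM itself: `ThreadMXBean` reports both total and user CPU time per thread, so the difference is time spent in the kernel. A minimal sketch, assuming the JVM supports thread CPU timing (the class name `KernelTimeSampler` and the 5-second interval are just illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class KernelTimeSampler {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isThreadCpuTimeSupported()) {
            System.err.println("thread CPU timing not supported on this JVM");
            return;
        }
        threads.setThreadCpuTimeEnabled(true);

        while (true) {
            for (long id : threads.getAllThreadIds()) {
                long cpu = threads.getThreadCpuTime(id);   // user + kernel, nanoseconds
                long user = threads.getThreadUserTime(id); // user only, nanoseconds
                ThreadInfo info = threads.getThreadInfo(id);
                if (cpu < 0 || user < 0 || info == null) {
                    continue; // thread died between calls, or timing unavailable
                }
                // Values are cumulative since thread start; diff successive
                // samples to see which threads accumulate kernel time during
                // a slow run.
                long kernelMs = (cpu - user) / 1_000_000;
                if (kernelMs > 0) {
                    System.out.printf("%s: %d ms in kernel%n",
                            info.getThreadName(), kernelMs);
                }
            }
            Thread.sleep(5_000); // one sample every 5 seconds
        }
    }
}
```

If one or two threads account for the extra kernel time, attaching `strace -c` or `perf` to those threads during the slow run should show which system calls (or page faults) are responsible.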

Sergey
    Just guessing: swapping? There would be no visible increase in IO since you're doing quite a bit of IO anyway, and a few page loads suffice to slow everything down a lot. I see it doesn't fit exactly, but checking this is simple and I can't see any other explanation. **Closers: what's wrong with this question?** Some more details may be necessary, but what exactly? I don't know, and probably neither does the OP, so advise us instead of denying help. – maaartinus May 21 '15 at 23:07
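The swapping guess is indeed cheap to test on Linux: the JVM's own swapped-out memory shows up as `VmSwap` in `/proc/self/status`, and system-wide swap activity as the `pswpin`/`pswpout` counters in `/proc/vmstat`. A rough, Linux-only sketch of that check (this is a suggested follow-up, not something from the thread):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SwapCheck {
    public static void main(String[] args) throws IOException {
        // VmSwap: how much of this JVM's memory is currently swapped out.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmSwap")) {
                System.out.println(line);
            }
        }
        // pswpin/pswpout: cumulative pages swapped in/out system-wide.
        // Sample before and after a slow run; nonzero deltas mean swapping.
        for (String line : Files.readAllLines(Paths.get("/proc/vmstat"))) {
            if (line.startsWith("pswpin") || line.startsWith("pswpout")) {
                System.out.println(line);
            }
        }
    }
}
```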

0 Answers