
I have created both the client and the server for a Java web service project in Eclipse. What I tried to do was:

Step 1 - Make 1000 server calls and measure the average time of each call.

Step 2 - Make 100000 server calls and measure the average time of each call.

What I see is that the average time per call in Step 2 is less than in Step 1. Can someone explain why that is?

Thanks, Prat

Student
  • Obviously the denominator 100000 is greater than 1000. Even though total(1) < total(2), the response time is usually less than 1 sec. The answer is simple mathematics. – Bhavik Shah Nov 01 '12 at 05:43
  • @Bhavik Shah: As mentioned above, I am calculating the average per call in each step: Step 1 - (total_time)/1000 and Step 2 - (total_time)/100000. total_time in Step 2 > total_time in Step 1. – Student Nov 01 '12 at 05:46

2 Answers


From the experience of having conducted hundreds of load tests, I think it could be because of warm-up time. Have you accounted for this? Systems usually need more time to handle the first N calls. This could be because...

 - thread pools need to be initialized
 - database connection pools must be populated
 - classes may need to be loaded into PermGen for the first time
 - | insert another init action here |

Warm-up time tends to even out after a few iterations, so the larger the number of iterations, the less it skews the average. Over several thousand iterations, the warm-up time is no longer significant. For small iteration counts you can get around this by making a few calls during the first X seconds to give the server time to warm up, and only increasing the user / thread count after the warm-up. JMeter, for example, has a way to do this.
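A minimal sketch of that idea, assuming a hypothetical callService() placeholder in place of whatever stub Eclipse generated for your service (treat it as a pattern, not drop-in code):

```java
public class WarmedUpBenchmark {

    private static final int WARM_UP_CALLS = 200;    // not measured
    private static final int MEASURED_CALLS = 1000;  // averaged

    // Placeholder for the stub Eclipse generated for your service;
    // replace the body with your real web service call.
    private static void callService() throws Exception {
        // e.g. port.someOperation(request);
    }

    public static void main(String[] args) throws Exception {
        // Warm-up phase: give thread pools, connection pools and the JIT
        // time to settle. These calls are deliberately not timed.
        for (int i = 0; i < WARM_UP_CALLS; i++) {
            callService();
        }

        // Measurement phase: only these calls contribute to the average.
        long start = System.nanoTime();
        for (int i = 0; i < MEASURED_CALLS; i++) {
            callService();
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("average per call: %.3f ms%n",
                elapsed / 1000000.0 / MEASURED_CALLS);
    }
}
```

With a warm-up phase like this, the averages over 1000 and 100000 measured calls should end up much closer together.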

Deepak Bala

I guess JIT is the reason.

Almost every JVM performs JIT compilation while your program runs, but each has its own conditions for when to start it. Assuming the JDK you use is the Oracle JDK, there are two modes in which you can run your JVM, client and server, and the conditions for starting JIT compilation differ between them. I will assume you selected server mode.

Based on two counters, the invocation counter and the back edge counter, the JIT compiler compiles your Java bytecode to native code, which can improve your application's performance.

The invocation counter counts how many times each method is invoked. When it exceeds a certain value, the JIT compiler compiles the "hot" method to native code and replaces the old code. The next time you invoke the method, the program runs the native code.
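As a rough, self-contained illustration (nothing to do with web services specifically), timing the same method in fixed-size batches usually shows later batches completing faster once the method has been compiled; the batch at which the time drops depends on your JVM and its thresholds:

```java
public class HotMethodDemo {

    // A cheap but non-trivial method that will become "hot".
    static double work(double x) {
        return Math.sqrt(x) * Math.sin(x);
    }

    public static void main(String[] args) {
        final int batchSize = 2000;
        double sink = 0;  // keep results alive so the work isn't optimized away
        for (int batch = 0; batch < 10; batch++) {
            long start = System.nanoTime();
            for (int i = 0; i < batchSize; i++) {
                sink += work(i);
            }
            System.out.println("batch " + batch + ": "
                    + (System.nanoTime() - start) + " ns");
        }
        System.out.println(sink);
    }
}
```

The sink variable is printed only so the JVM cannot discard the computation as dead code.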

The back edge counter counts how many times a loop body executes. When it exceeds a certain value, the JIT compiler compiles the code in that loop to native code and replaces the old code. Because the replacement happens while the method is still on the invocation stack, it is called OSR (On-Stack Replacement).
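For example, a single long-running loop inside one invocation is compiled because of the back edge counter, not the invocation counter; on HotSpot, running a sketch like the one below with -XX:+PrintCompilation marks such OSR compilations with a '%' in its output:

```java
public class OsrDemo {
    public static void main(String[] args) {
        // main() is invoked only once, so the invocation counter never fires.
        // The long loop drives the back edge counter instead, and the method
        // is replaced with compiled code while the loop is running (OSR).
        long sum = 0;
        for (int i = 0; i < 50000000; i++) {
            sum += i % 7;
        }
        System.out.println(sum);
    }
}
```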

You can control the invocation counter's threshold with the JVM parameter -XX:CompileThreshold=10000. That means the method will be compiled after 10000 invocations; 10000 is the default value in server mode.

You can control the back edge counter's threshold with the JVM parameter -XX:OnStackReplacePercentage=140. The formula is: threshold = (CompileThreshold * (OnStackReplacePercentage - InterpreterProfilePercentage)) / 100. By default, InterpreterProfilePercentage=33 and OnStackReplacePercentage=140 in server mode. That is, once the back edge count in a loop exceeds 10700, OSR compilation starts.
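Plugging the server-mode defaults from above into that formula, the arithmetic works out as follows (a plain calculation, not JVM code):

```java
public class OsrThreshold {
    public static void main(String[] args) {
        int compileThreshold = 10000;           // -XX:CompileThreshold
        int onStackReplacePercentage = 140;     // -XX:OnStackReplacePercentage
        int interpreterProfilePercentage = 33;  // -XX:InterpreterProfilePercentage

        int osrThreshold = compileThreshold
                * (onStackReplacePercentage - interpreterProfilePercentage) / 100;
        System.out.println(osrThreshold);       // 10000 * (140 - 33) / 100 = 10700
    }
}
```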

In short, I think the number of invocations in Step 2 is high enough to trigger JIT compilation, so Step 2 shows better average performance.

caoxudong