
I have two REST endpoints driving some navigation on a web site. Both create nearly the same response, but one gets its data straight from the db, whereas the other first has to ask a search engine (Solr) for some data and then do the db calls.
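
The two endpoints look roughly like this (a minimal sketch, not my actual code; the controller, DAO, and response class names are made up, and I'm assuming Spring MVC with SolrJ):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.ResponseBody;

    @Controller
    public class NavigationController {

        // Minimal stand-ins so the sketch compiles; the real classes are richer.
        public static class Navigation { }
        public interface NavigationDao {
            Navigation loadNavigation();
            Navigation loadNavigation(Iterable<?> solrDocs);
        }

        private final NavigationDao dao;   // hypothetical db access layer
        private final SolrServer solr;     // SolrJ client for the external Solr instance

        public NavigationController(NavigationDao dao, SolrServer solr) {
            this.dao = dao;
            this.solr = solr;
        }

        // Endpoint 1: builds the navigation straight from the db.
        @RequestMapping("/nav/plain")
        @ResponseBody
        public Navigation plain() {
            return dao.loadNavigation();
        }

        // Endpoint 2: asks Solr first, then does the db calls with what Solr returned.
        @RequestMapping("/nav/search")
        @ResponseBody
        public Navigation viaSolr() throws SolrServerException {
            QueryResponse result = solr.query(new SolrQuery("*:*"));
            return dao.loadNavigation(result.getResults());
        }
    }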

If I profile both endpoints with JProfiler, I get a roughly 60% higher runtime for the second one (about 31 ms vs. 53 ms). That's as expected.

Profile result:

[JProfiler screenshot]

If I view the same Ajax calls from the client side, I get a very different picture.

  • The faster of the two calls takes about 146 ms of waiting and network time
  • The slower of the two calls takes about 1.4 seconds of waiting and network time

[Chrome developer tools timing screenshots for both calls]

Frontend timing is measured with the Chrome developer tools. The server is Tomcat 7.0.30 running in STS 3.2. Client and server live on the same machine; the db and Solr are external, so there should be no network latency between Tomcat and the browser. As a side note: the faster response has the bigger payload (2.6 vs. 4.5 kb).

I have no idea why the slower of the two calls takes about 60% more server time but nearly 1000% more "frontend time" in total.

The question is: Is there any way I can figure out where these timing differences originate?
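
One thing I could do to narrow this down is log the server-side wall-clock time per request with a simple servlet filter and compare it with the Chrome numbers. A sketch (this filter is not part of my actual code):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.annotation.WebFilter;
    import javax.servlet.http.HttpServletRequest;

    // Logs the wall-clock time Tomcat spends handling each request, from the
    // moment the filter sees it until the response has been written.
    @WebFilter("/*")
    public class RequestTimingFilter implements Filter {

        public void init(FilterConfig filterConfig) {
        }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            try {
                chain.doFilter(request, response);
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1000000L;
                String uri = ((HttpServletRequest) request).getRequestURI();
                System.out.println(uri + " took " + elapsedMs + " ms (server wall clock)");
            }
        }

        public void destroy() {
        }
    }

If the filter already reports something close to 1.4 seconds for the slow call, the time is lost inside the request handling; if it stays near the 53 ms from JProfiler, the extra time is spent before or after the servlet (connector, queueing, network).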

Dirk Lachowski

1 Answer


By default, the CPU views in JProfiler show times in the "Runnable" thread state. If a thread reads data from a socket connection or waits for some condition, that time is not included in the "Runnable" thread state.

In the upper right corner of the CPU views there is a thread state selector. If you change that to "All states", you will get times that you can compare with the wall clock times from the browser.
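
To illustrate the difference outside of JProfiler, here is a small standalone sketch (not related to the code in the question): wall-clock time includes the time a thread spends blocked or waiting, while the thread's CPU time, which is closer to what the "Runnable" view reflects, does not.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class WaitVsCpuDemo {

        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            long wallStart = System.nanoTime();
            long cpuStart = threads.getCurrentThreadCpuTime();

            // Stand-in for a blocking call (waiting on a Solr or db socket).
            Thread.sleep(500);

            long wallMs = (System.nanoTime() - wallStart) / 1000000L;
            long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1000000L;

            // Typically prints something like: wall clock: 500 ms, thread CPU: 0 ms
            System.out.println("wall clock: " + wallMs + " ms, thread CPU: " + cpuMs + " ms");
        }
    }

With "All states" selected, that waiting time is included as well, so the JProfiler numbers become comparable to what the browser measures.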

Ingo Kegel