
We're running Tomcat 6 on an Ubuntu 2GB slice on slicehost.com

The JSP application, FWIW, is OpenClinica 3.1. I implemented SSL pretty much by the book, as you can see:

    <Connector port="8443"
               scheme="https" SSLEnabled="true"
               keystorePass="XXXXX" keystoreFile="XXXXX"
               maxKeepAliveRequests="0"
               sessionCacheSize="0" sessionTimeout="0"
               compression="on" maxThreads="500"
               clientAuth="false" sslProtocol="TLS" />

The problem is that the OpenClinica Java application performs a large number of HTTP requests to build a page: using the Chrome developer tools, I can see between 70 and 80 requests for a typical page.

When you add an SSL handshake to each request, the additional network latency just kills the application's response time. FWIW, the client users are located in Israel, Europe and the US, so running a local server next to the users is not really feasible. I am aware that since Slicehost is in the US, network latency to Israel is poor, but since the plain-HTTP performance of the server was acceptable to good, I feel we should be able to do better.

In an attempt to minimize the SSL handshakes, I set an unlimited sessionCacheSize and sessionTimeout, as can be seen in the connector definition above.

However, when I run ssldump on the client side, I still see a lot of handshaking going on, which suggests that Tomcat is in effect ignoring these parameters.

The server is not stressed: with 5 simultaneous users there is about 100 MB of free memory and little to no swapping.

Danny Lieberman
3 Answers


SSL handshaking on each resource is a telltale sign that HTTP's keep-alive functionality isn't working.

With keep-alive, a single SSL handshake is done for a TCP connection, then multiple resources can be requested via that single connection. Modern browsers like to open more than one TCP connection to avoid bottlenecks on slow-loading resources, so you'll still see multiple handshakes, but certainly fewer than with keep-alive off.

maxKeepAliveRequests="0" is, I believe, turning off keep-alive (I actually can't find documentation on what 0 does; 1 disables keep-alive and -1 sets no limit, so I'm assuming 0 also effectively disables it).

If you intended to disable keep-alive, I'd recommend reconsidering; if you intended to set it to unlimited, change that option to -1.
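Adapting the connector from the question, the change is a one-attribute edit; this is just a sketch, with the other attributes kept exactly as posted and the keystore values still redacted:

```xml
<!-- Same connector as in the question; only maxKeepAliveRequests changes.
     -1 = no limit on requests per kept-alive connection (per the Tomcat 6 docs). -->
<Connector port="8443"
           scheme="https" SSLEnabled="true"
           keystorePass="XXXXX" keystoreFile="XXXXX"
           maxKeepAliveRequests="-1"
           sessionCacheSize="0" sessionTimeout="0"
           compression="on" maxThreads="500"
           clientAuth="false" sslProtocol="TLS" />
```

With keep-alive on, the 70-80 resources per page should be fetched over a handful of persistent connections (one handshake each) instead of one handshake per request.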

Shane Madden
  • On Apache HTTPd, setting maxKeepAliveRequests to 0 sets unlimited keepalives but I agree with you that on tomcat, setting it to 0 may effectively be disabling them. – mahnsc Aug 14 '11 at 21:20
  • @mahnc Good call. http://tomcat.apache.org/tomcat-6.0-doc/config/http.html was specific about setting maxKeepAliveRequests to -1 for unlimited. My bad for confusing with the Apache settings. The documentation is unclear on what value 0 means but ssldump seems to indicate that it turns it off. – Danny Lieberman Aug 15 '11 at 05:29
  • @Danny Has setting it to -1 resolved the excessive handshaking? – Shane Madden Aug 15 '11 at 14:55
  • @mahnc Yes. I've been monitoring the server today and setting to -1 does the trick. Users report improved user experience too. Thanks for f/u. Looking at the Tomcat6 documentation - I would say it needs some serious work ;-) – Danny Lieberman Aug 15 '11 at 14:59
  • @Danny Great - if that's resolved the issue, go ahead and click the check mark at the top left of the answer to accept it. – Shane Madden Aug 15 '11 at 15:25
  • @mahnc. Yep - users seem happy. One more thing - tangential but still important in this sort of scenario is the amount of logging the application is doing - going thru logs I discovered that the Web app was logging every transaction - and I reduced log levels to errors only. I know from experience that on DB servers like mysql and Postgresql - the detailed logging just hoses performance. So - being critical about performance issues is a good thing I reckon. ;-) – Danny Lieberman Aug 15 '11 at 18:47

I would suggest using an Apache reverse proxy with the AJP connector on the front end. Put the SSL in Apache, and connect to Tomcat in clear-text over a private link (e.g., localhost).

Use Apache for what it's good at (web server with lots of fancy options) and Tomcat for what it's good at (Java applications).
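A sketch of what that front end could look like, assuming mod_ssl and mod_proxy_ajp are enabled; the certificate paths and AJP port here are placeholders, not values from the question:

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key

    # Note: in Apache HTTPd, MaxKeepAliveRequests 0 means unlimited
    KeepAlive On
    MaxKeepAliveRequests 0

    # Hand everything to Tomcat over AJP on the loopback interface
    ProxyPass        / ajp://localhost:8009/
    ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>
```

On the Tomcat side this assumes the stock AJP connector (`<Connector port="8009" protocol="AJP/1.3" />`) is enabled in server.xml, which it is by default.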

bahamat
    - I agree in general, but we are using a single server right now running Tomcat 6. Logically, there is no advantage to adding another proxy layer on the same server; even if we assume that Apache with OpenSSL is faster than Tomcat with Java SSL, the bottleneck is not the server but the network latency caused by the number of SSL handshakes made by remote client browsers. It seems to me that since handshakes are the root cause of the problem, we need an efficient way of caching the SSL client sessions rather than re-handshaking on every HTTP request – Danny Lieberman Aug 15 '11 at 05:35
  • If we assume that apache/openssl performs better than tomcat/jsse (and that may only be true at high loads), we do have the option of going native with tomcat so that we can use openssl there any way. However, changing the MaxKeepAliveRequest setting from 0 to something like 500 (apache's default) seems like a good start. Let us know how it worked out? – mahnsc Aug 15 '11 at 12:01
  • @mahnc I set MaxKeepAliveRequest="-1" (unlimited) since the machine is not stressed on CPU or memory resources, and since the Web app performs 70-80 HTTP requests per page we're going to need a high keep-alive limit anyhow as we ramp up more users on the system. Granted, I will have to keep an eye on the sys stats. – Danny Lieberman Aug 15 '11 at 15:03

Keep-alive is obviously the biggest gain you will have, but if you still need to reduce latency you should check that you are using the APR native connector.

http://tomcat.apache.org/tomcat-6.0-doc/config/http.html#Connector Comparison
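For illustration, an APR-based SSL connector in Tomcat 6 looks roughly like this; note that the APR connector takes OpenSSL-style PEM certificate attributes instead of a Java keystore, and the file paths below are placeholders:

```xml
<!-- Assumes the tomcat-native (APR) library is installed and loaded via
     the AprLifecycleListener; certificate/key paths are placeholders. -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
           scheme="https" SSLEnabled="true"
           SSLCertificateFile="/path/to/server.crt"
           SSLCertificateKeyFile="/path/to/server.key"
           maxKeepAliveRequests="-1" maxThreads="500" />
```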

Ochoto