I am currently using DBCP for connection pooling since it is more or less provided with Tomcat and easy to set up. I am thinking about migrating to Glassfish or Jetty and haven't yet determined which connection pool provider I will use.
I'm running JBoss Seam with Hibernate and have been getting decent performance with this setup so far.
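For reference, the pool side of that stack boils down to something like the following. This is only a sketch in plain Java to show the knobs involved; the driver, URL, credentials, and numbers are placeholders rather than my real settings, and the actual setup goes through Tomcat's configuration:

    // Minimal sketch of a DBCP 1.x pool; all values here are placeholders.
    import org.apache.commons.dbcp.BasicDataSource;

    public class PoolConfig {
        public static BasicDataSource createDataSource() {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName("com.mysql.jdbc.Driver");   // assumed driver
            ds.setUrl("jdbc:mysql://localhost:3306/myapp");   // placeholder URL
            ds.setUsername("app");
            ds.setPassword("secret");
            ds.setMaxActive(8);          // maximum connections handed out at once
            ds.setMaxIdle(4);            // idle connections kept in the pool
            ds.setMaxWait(10000);        // ms to wait for a free connection before failing
            ds.setValidationQuery("SELECT 1");
            ds.setTestOnBorrow(true);    // validate a connection before handing it out
            return ds;
        }
    }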
I've seen a few articles saying c3p0 is better, others saying Proxool is better, and still others saying they are all fairly stale, so DBCP is the safest choice since it is better maintained and documented.
I ran some JMeter tests tonight for grins and found that with 10 concurrent users hitting the site in random order with no delay between page requests, DBCP cannot get a connection and the application blows up. I'm disappointed by the error, but happy with the performance otherwise. I think I can improve performance a bit with second-level caching.
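At the JDBC level, that JMeter run amounts to roughly the following. Again, just a sketch: PoolConfig is the illustrative class above, the loop counts are arbitrary, and the query is a stand-in for whatever a page actually does:

    // 10 threads borrowing connections with no think time, mirroring the JMeter test.
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.sql.DataSource;

    public class PoolSmokeTest {
        public static void main(String[] args) throws InterruptedException {
            final DataSource ds = PoolConfig.createDataSource();
            ExecutorService workers = Executors.newFixedThreadPool(10);
            for (int i = 0; i < 10; i++) {
                workers.submit(new Runnable() {
                    public void run() {
                        for (int j = 0; j < 100; j++) {
                            Connection c = null;
                            try {
                                c = ds.getConnection();                    // blocks for up to maxWait
                                c.createStatement().execute("SELECT 1");   // stand-in for a page's queries
                            } catch (SQLException e) {
                                System.err.println("Could not get or use a connection: " + e.getMessage());
                            } finally {
                                if (c != null) {
                                    try { c.close(); } catch (SQLException ignore) {}
                                }
                            }
                        }
                    }
                });
            }
            workers.shutdown();
            workers.awaitTermination(5, TimeUnit.MINUTES);
        }
    }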
Questions:
If I'm not using EJB or OSGi, is Jetty 7 the clear winner over Glassfish and Tomcat 6? It is certainly much smaller and should therefore consume fewer resources out of the box. I am only deploying one web application at this point; I will deploy more later.
What connection pool do you recommend for a simple application server? I don't anticipate 10 concurrent users with no delay between requests, but it's nice to know what the application can handle before it blows up and how it behaves when it hits that point.
Lastly, the "cannot get connection" error usually indicates that a query is running a little too long, so its connection cannot be returned to the pool in time. Aside from ensuring all queries run efficiently, what other areas would you check?
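For what it's worth, here is roughly what I'm auditing first. My data access goes through Hibernate rather than hand-written JDBC, so the names below are made up for illustration, but the underlying pattern is the same: every borrowed connection has to go back to the pool in a finally block, and DBCP itself can flag anything that doesn't:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;
    import org.apache.commons.dbcp.BasicDataSource;

    public class LeakChecks {

        // Pattern every query path should follow: close() in finally returns the
        // connection to the pool even when the query throws.
        static void runQuery(DataSource ds, long customerId) throws SQLException {
            Connection conn = null;
            PreparedStatement ps = null;
            try {
                conn = ds.getConnection();
                ps = conn.prepareStatement("SELECT id FROM orders WHERE customer_id = ?"); // illustrative query
                ps.setLong(1, customerId);
                ps.executeQuery();
            } finally {
                if (ps != null)   { try { ps.close(); }   catch (SQLException ignore) {} }
                if (conn != null) { try { conn.close(); } catch (SQLException ignore) {} }
            }
        }

        // DBCP 1.x can also reclaim and log connections that are held too long.
        static void enableAbandonedTracking(BasicDataSource ds) {
            ds.setRemoveAbandoned(true);       // reclaim connections that look leaked
            ds.setRemoveAbandonedTimeout(60);  // seconds a connection may stay checked out
            ds.setLogAbandoned(true);          // log a stack trace of the code that borrowed it
        }
    }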
The exception:
00:38:38,886 WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: null
00:38:38,898 ERROR [JDBCExceptionReporter] Already closed.
00:38:46,823 INFO [DefaultLoadEventListener] Error performing load command
org.hibernate.SessionException: Session is closed!
My average response time is 300 ms, with a minimum of 100 ms, a maximum of 5 s, and a standard deviation of 300 ms. This is on a base Linode server (360 MB RAM, the simplest package).
Any other comments?
Thanks,
Walter