
I am working on a Spring MVC application where we are using HikariCP. We are using PostgreSQL as the database, with 150 connections allowed in PostgreSQL and a pool size of 15 for Hikari. At times we suddenly get an error that a timeout occurred. The error goes away in 10-20 seconds, but it slows down the entire server. Many times the server isn't even busy when it happens. I read some other threads and added leak detection, but that didn't help. Any ideas what I am doing wrong or how to fix this?

Error log:

java.sql.SQLTimeoutException: Timeout after 30000ms of waiting for a connection.
        at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:233)
        at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:183)
        at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:110)

root-context.xml:

<beans:bean id="dataSource" class="com.zaxxer.hikari.HikariDataSource"  destroy-method="close">
    <beans:property name="dataSourceClassName" value="org.postgresql.ds.PGSimpleDataSource"/>
    <beans:property name="minimumIdle" value="2"/>
   <beans:property name="maximumPoolSize" value="17" />
    <beans:property name="maxLifetime" value="300000" />
    <beans:property name="idleTimeout" value="25000" />
    <beans:property name="leakDetectionThreshold" value="3000"/>
    <beans:property name="dataSourceProperties">
        <beans:props>
            <beans:prop key="url">jdbc:postgresql://localhost:5432/DB_NAME</beans:prop>
            <beans:prop key="user">USERnamE</beans:prop>
            <beans:prop key="password">PASSWORD</beans:prop>
        </beans:props>
    </beans:property>
</beans:bean>

<!-- Hibernate 4 SessionFactory Bean definition -->
<beans:bean id="hibernate4AnnotatedSessionFactory"
            class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <beans:property name="dataSource" ref="dataSource"/>
    <beans:property name="packagesToScan" value="com.ourapp.spring.model"/>
    <beans:property name="hibernateProperties">
        <beans:props>
            <beans:prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQL9Dialect</beans:prop>
            <beans:prop key="hibernate.show_sql">false</beans:prop>
            <beans:prop key="hibernate.jdbc.batch_size">50</beans:prop>
            <beans:prop key="hibernate.hbm2ddl.auto">update</beans:prop>
            <beans:prop key="cache.use_second_level_cache">true</beans:prop>
            <beans:prop key="cache.use_query_cache">true</beans:prop>
            <beans:prop key="hibernate.order_updates">true</beans:prop>
            <beans:prop key="show_sql">false</beans:prop>
        </beans:props>
    </beans:property>
</beans:bean>

Any help would be nice. Thank you. :-)

Update

Typical save and read:

@Repository
@Transactional
public class AppDetailsDAOImpl implements AppDetailsDAO {

    private final SessionFactory sessionFactory;
    @Autowired
    public AppDetailsDAOImpl(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }


    @Override
    public Long saveAppDetails(AppDetails appDetails, int personid) {
        Session session = this.sessionFactory.getCurrentSession();
        Person person = (Person) session.get(Person.class, personid);
        if (person != null) {
            person.getAppDetailsSet().add(appDetails);
            appDetails.setUserApps(person);
            Long saveid = (Long) session.save(appDetails);
            session.flush();
            return saveid;
        }
        return (long) 0;
    }

    @Override
    public AppDetails getAppDetailsByAppId(Long appId) {
        Session session = this.sessionFactory.getCurrentSession();
        return (AppDetails) session.get(AppDetails.class, appId);
    }
}
We are Borg
  • So is the pool size big enough for you then? If you've checked that you're not leaking connections, then you have too many / too long transactions for the pool settings. Increasing the pool size can help, but I'm sure you've also read the HikariCP article about connection pool sizing. – Kayaman May 17 '18 at 11:38
  • @Kayaman: The problem is I have mostly short-running transactions and very rarely long-running transactions. I tried increasing the pool size last time from 10 to 15; that didn't solve the problem, and sometimes the app is idle and then when I try to access it, I get this error. – We are Borg May 17 '18 at 11:46
  • If the app is idle, then you do have a connection leak. However that shouldn't "go away in 20 seconds". Configure HikariCP to print out pool statistics and study those to see what's happening inside the pool. – Kayaman May 17 '18 at 11:51
  • @Kayaman: Any idea how to do that in XML? Most of the code is in Java. Thanks. Yes, that's the strange part: it just slows down for 10-15 seconds and then everything is up and running. – We are Borg May 17 '18 at 11:53
  • Check the Hikari documentation for relevant parameters. Edit: ah, you'll need to pull in Dropwizard/Codahale for metrics. Is that possible for you? – Kayaman May 17 '18 at 11:54
  • @rustyx: It's our test server... I was the only one logged in... Didn't do a thread dump. – We are Borg May 17 '18 at 12:06
  • Catch SQLTimeoutException and do a [thread dump](https://stackoverflow.com/a/50376945/485343) – rustyx May 17 '18 at 12:42
  • Or you can do it properly with the metrics provided by HikariCP with Dropwizard. There's no reason to do jungle debugging when you have everything you need given to you on a silver platter. Use the right tool for the right job, you don't do profiling with wall clock time either. – Kayaman May 17 '18 at 13:24
  • @Kayaman: I did a bit more reading; from what I can understand, I have to manually close connections? Is that correct? Generally I only get (not open) a session from Hibernate's SessionFactory and then flush the session once the task is done, and for long-running queries I clear the session with session.clear(). Isn't that enough? – We are Borg May 18 '18 at 07:20
  • No of course that's not enough. Where are you returning the connection to the pool for others to use? If you're not using something that would handle the connection/transaction for you, and you're not closing connections, then you've written a very basic elementary level connection leak. – Kayaman May 18 '18 at 07:25
  • @Kayaman: I have updated the main post to include the code which I normally use for save and read. Can you check it out? I have tried to close connections, but then if there are any more DB-related calls from the service layer, I get a "connection already closed" error. Thanks. :-) – We are Borg May 18 '18 at 07:31
  • No that's alright. You have Spring handling the connections so you don't need to do things manually and there's no connection leak. – Kayaman May 18 '18 at 07:40
  • @Kayaman: What is going wrong then? I tried increasing the max pool size; it didn't help. It's now set to 25. I am now trying the metrics part, but as I am working on Spring MVC with old XML configuration, I am having some difficulties integrating metrics. Any idea what I should look for? Thanks. :-) – We are Borg May 18 '18 at 07:43
  • @Kayaman: I have posted the entire XML config in this pastebin: https://pastebin.com/vnE1Ky2S, in case that helps. Thanks a lot. :-) – We are Borg May 18 '18 at 07:48
  • Debugging by reading configuration is really ineffective, especially if you have no idea about the root cause. You still have a pool acting weirdly (or at least that's what you're assuming; the pool is just a suspect), but you don't know *how* weirdly it's acting. If you can't get the metrics working, at least enable DEBUG logging for `com.zaxxer.hikari` and gather some basic data about the pool status. You can then look at how the pool usage grows and maybe get more information from it (see the metrics sketch after these comments). – Kayaman May 18 '18 at 08:26
  • To check for leaks, set the HikariCP leakDetectionThreshold to longer than your longest running transaction, and watch the log for leak messages. If you have a leak and you don’t find it and fix it, your application will eventually freeze. – brettw May 18 '18 at 11:06
  • @brettw: Did exactly that in this deployment. Waiting for a freeze to happen. What helped currently was to set Hibernate's connection release mode to after_statement. – We are Borg May 18 '18 at 11:07
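
As a concrete illustration of the metrics suggestion in the comments above, here is a minimal sketch, assuming Dropwizard Metrics (Codahale) is on the classpath and the HikariDataSource instance is reachable from Java code; the class and method names are made up for illustration:

import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Slf4jReporter;
import com.zaxxer.hikari.HikariDataSource;

public class PoolMonitoring {

    // Attach a Dropwizard MetricRegistry to the pool so HikariCP publishes its gauges
    // (active, idle and pending connections, connection usage/wait times), then log
    // those metrics every 30 seconds via SLF4J.
    public static void monitor(HikariDataSource dataSource) {
        MetricRegistry registry = new MetricRegistry();
        dataSource.setMetricRegistry(registry);

        Slf4jReporter reporter = Slf4jReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(30, TimeUnit.SECONDS);
    }
}

Alternatively, enabling DEBUG logging for com.zaxxer.hikari (as suggested above) prints periodic pool statistics without adding any dependencies.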

2 Answers

3

You run out of connections due to your application business logic. Like you said:

the problem is I have mostly short running transactions and very rarely long running transactions

Instead of increasing the pool size for short-running transactions, declare a separate DataSource bean for the long-running ones. This new DataSource should be backed by its own HikariCP pool. It can even have a minimum size of 0, since the cost of establishing a new database connection is insignificant relative to a long-running transaction, e.g. a monthly report.
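
A minimal sketch of such a pool, assuming programmatic configuration (the class name, pool name, maximum size of 3 and 60-second connection timeout are illustrative assumptions; an equivalent XML bean mirroring the existing dataSource definition would work just as well):

import java.util.Properties;

import com.zaxxer.hikari.HikariDataSource;

public class ReportingDataSourceFactory {

    // A small, separate pool for long-running (reporting/batch) work so it cannot
    // starve the main OLTP pool. minimumIdle=0 lets it shrink to nothing when idle.
    public static HikariDataSource reportingDataSource() {
        HikariDataSource ds = new HikariDataSource();
        ds.setPoolName("reportingPool");
        ds.setDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");
        ds.setMinimumIdle(0);
        ds.setMaximumPoolSize(3);
        ds.setConnectionTimeout(60000);

        Properties props = new Properties();
        props.setProperty("url", "jdbc:postgresql://localhost:5432/DB_NAME");
        props.setProperty("user", "USERnamE");
        props.setProperty("password", "PASSWORD");
        ds.setDataSourceProperties(props);
        return ds;
    }
}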

Separating OLTP and OLAP processing is the right approach; see e.g. CQRS. You can, for example, have a separate OLAP database, refreshed daily, to handle reporting while the main OLTP database is unaffected by time-consuming reporting workloads.

Karol Dowbecki
  • Thank you for the answer. I am trying out your suggestion by creating a new datasource, but I am getting an error about it. Can you please check this out: https://stackoverflow.com/questions/50509420/java-spring-create-a-separate-datasource-bean-throws-nonunique-error – We are Borg May 24 '18 at 12:33
0

Change the PostgreSQL driver and change your settings for HikariCP:

spring.datasource.hikari.minimumIdle=5
spring.datasource.hikari.maximumPoolSize=20
spring.datasource.hikari.idleTimeout=30000
spring.datasource.hikari.poolName=SpringBootJPAHikariCP
spring.datasource.hikari.maxLifetime=2000000
spring.datasource.hikari.connectionTimeout=30000
spring.jpa.hibernate.connection.provider_class=org.hibernate.hikaricp.internal.HikariCPConnectionProvider
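
Note that spring.datasource.hikari.* and spring.jpa.* properties are only picked up by Spring Boot's auto-configuration; with the plain Spring XML setup in the question they have no effect. As a sketch, a rough programmatic equivalent of the values above (the JDBC URL and credentials are copied from the question's config, and the class and method names are made up; the HikariCPConnectionProvider line, which lets Hibernate manage the pool itself, is not reflected here):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariSettings {

    // Programmatic equivalent of the spring.datasource.hikari.* properties above,
    // for an application that is not running on Spring Boot.
    public static HikariDataSource configuredDataSource() {
        HikariConfig config = new HikariConfig();
        config.setPoolName("SpringBootJPAHikariCP");
        config.setMinimumIdle(5);
        config.setMaximumPoolSize(20);
        config.setIdleTimeout(30000);
        config.setMaxLifetime(2000000);
        config.setConnectionTimeout(30000);
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/DB_NAME");
        config.setUsername("USERnamE");
        config.setPassword("PASSWORD");
        return new HikariDataSource(config);
    }
}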