
I am getting the error below.

org.apache.commons.dbcp2.LifetimeExceededException: The lifetime of the connection [198,970] milliseconds exceeds the maximum permitted value of [60,000] milliseconds

What exactly am I doing?

  • I am trying to do a load run of 500 update and delete queries into MS SQL Server 2019. Of these 500, every 50 queries can be considered one transaction boundary, which means only 10 transaction sets are run per second.

Now I am getting the error below in Mule ESB 3.8.0, and I have a connection pool with a minimum size of 256. Can anyone suggest what the reason could be? Could it be because the queries are spending more time in the DB? Could it be a DB query issue? I am seeing some transaction sets (50 queries) taking more than 3 minutes.

I am using Hibernate queries.

I am not sure where to start the analysis or how to handle this. Can anyone suggest?

Full error log:

2023-04-13 11:04:41,052 [4616] [WARN] commons.dbcp2.BasicDataSource | An internal object pool swallowed an Exception.
org.apache.commons.dbcp2.LifetimeExceededException: The lifetime of the connection [198,970] milliseconds exceeds the maximum permitted value of [60,000] milliseconds
        at org.apache.commons.dbcp2.PoolableConnectionFactory.validateLifetime(PoolableConnectionFactory.java:424) ~[commons-dbcp2-2.1.1.jar:2.1.1]
        at org.apache.commons.dbcp2.PoolableConnectionFactory.passivateObject(PoolableConnectionFactory.java:364) ~[commons-dbcp2-2.1.1.jar:2.1.1]
        at org.apache.commons.pool2.impl.GenericObjectPool.returnObject(GenericObjectPool.java:578) [commons-pool2-2.4.2.jar:2.4.2]
        at org.apache.commons.dbcp2.PoolableConnection.close(PoolableConnection.java:206) [commons-dbcp2-2.1.1.jar:2.1.1]
Mahesh A R
  • This is off-topic, but note that Mule 3.8.0 was released in 2016. You should really consider migrating to a newer version, if only because of the unpatched bugs and security vulnerabilities. – aled Apr 18 '23 at 11:28

1 Answer


That's not exactly an error. It is a warning that the connection was closed because it exceeded the maximum lifetime configured in the DBCP2 database connection pool, which according to the message is 60 seconds (60,000 milliseconds). That is handled internally by the pool. You only need to worry about it if it is causing an actual problem in the application; otherwise just ignore it, because it is the expected behavior of a database connection pool. For example, if your transactions are taking 3 minutes, which sounds like an excessive time for a database operation, you might want to increase the lifetime, but only if transactions are actually failing.
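If you do decide to raise the lifetime, the relevant DBCP2 property is `maxConnLifetimeMillis`. As a hypothetical sketch only, assuming the pool is defined as a Spring bean inside the Mule 3.x application (a common setup when DBCP2 classes appear in the logs; the URL, bean id, and values below are placeholders, not taken from the question):

```xml
<!-- Hypothetical Spring bean for the DBCP2 pool in a Mule 3.x app -->
<spring:bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource">
    <spring:property name="url" value="jdbc:sqlserver://dbhost:1433;databaseName=mydb"/>
    <spring:property name="minIdle" value="256"/>
    <!-- Raise the maximum connection lifetime from 60000 ms to 300000 ms (5 minutes) -->
    <spring:property name="maxConnLifetimeMillis" value="300000"/>
</spring:bean>
```

Note that this only hides the symptom; if transactions really take 3 minutes, the underlying slowness still needs to be investigated.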

From the Mule side, as a basic profiling technique, you could capture several thread dumps and analyze whether many threads are waiting on answers from the database or doing something else. I would expect that they are waiting on the database, but there is not enough information. You'll have to learn how to interpret thread dumps and how to relate them to the application on your own.
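Thread dumps are usually captured externally with `jstack <pid>`, but as an illustration of what they contain, here is a small self-contained sketch that takes an equivalent snapshot in-process using the standard `ThreadMXBean` API (the class name is mine, not part of Mule or DBCP2):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDump {

    // Capture a stack trace for every live thread, similar to the output
    // of `jstack <pid>`. Comparing several snapshots taken seconds apart
    // shows whether threads stay blocked on JDBC socket reads (i.e. the
    // database) or are busy elsewhere.
    public static String capture() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        // true, true => include lock and synchronizer information
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(capture());
    }
}
```

In a dump taken during the slow runs, look for many threads parked inside the SQL Server JDBC driver: that would confirm the time is spent waiting on the database rather than in Mule itself.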

From the database side, a database administrator (DBA) can use administrative tools to understand the load on the database and identify bottlenecks. For example, in some cases a lack of appropriate indexes can introduce delays in queries.

As it is, the question is too abstract. From a symptom in the application that is not necessarily an issue, you are trying to infer a problem. You need to look into the queries and the database and analyze the behavior before looking into the pool. That doesn't mean the problem could not be in the pool, but there are many factors in this kind of problem, and without specific details it is usually not possible to point to a single root cause.

aled
  • Yes, it is a warning, but I am seeing that when I send 1 message/second, the update for one message (a transaction set of approximately 50 queries) takes 400 milliseconds to complete. But when I fire at a rate of 10 messages/second, it slowly grows to 3 minutes per transaction set. I suspect this could be the reason the connection is closed, since I have configured 60 seconds. But my question here is: could it also be a delay at the DB level that is causing this issue? – Mahesh A R Apr 18 '23 at 12:16
  • 3 minutes is longer than 60 seconds. If the connection was in use for several minutes, it exceeded the allowed lifetime, will not be reused, and should be closed. Whether the cause of the delay comes from the database or somewhere else is impossible to know with the information provided. It is possible, since the time increases. It could be the data too. You should do more thorough profiling to find it. That kind of analysis is usually not a good match for a Stack Overflow question, which should be more specific. I'll add a bit more guidance to the answer. – aled Apr 18 '23 at 13:40
  • Thanks, today we are also planning to run monitoring tools at the DB level to see why the DB queries are taking more time, and what is holding a query so long when we run 10 messages/second. – Mahesh A R Apr 19 '23 at 03:59
  • What we noted now is that there were some indexing issues in the DB tables. We started to trap the delaying queries one after another and added indexes, and this warning got reduced. The query delay is no longer showing evidently in the Java profilers either. As we are still noting this warning in a few instances, I have asked our DBA to monitor for any such delaying queries and double-check the indexing. If that resolves it fully, I will post that as the answer. – Mahesh A R Apr 19 '23 at 11:08