
I have a Flask web application using SQLAlchemy with MySQL, and I have set up a scoped_session(). I also have a teardown_request handler that calls session.remove() after every request finishes. For some odd reason, if no requests are made on the web app for a day or more, the app gets "OperationalError: MySQL server has gone away".
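Roughly, the setup looks like this (the connection URL and names are simplified placeholders):

```python
from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)

# placeholder URL; the real app points at the kb_dev database
engine = create_engine("mysql://sqladmin:secret@my_host/kb_dev")
session = scoped_session(sessionmaker(bind=engine))

@app.teardown_request
def remove_session(exception=None):
    # runs after every request, whether it succeeded or raised
    session.remove()
```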

While debugging, I ran SHOW PROCESSLIST and saw the following:

39817253 | sqladmin | my_host | kb_dev   | Sleep   |  174 |

The 174 is the number of seconds the connection from my application has been "sleeping". It keeps counting up if the application doesn't make another request.

It seems like my application holds on to the connection to MySQL even after my request has finished! And there is usually only one process no matter how many requests I make with my application, simultaneously or not.

My question is: is it normal for the connection to be "sleeping" this long? I'm pretty sure the extended sleeping is causing MySQL to cut the connection after a certain timeout, which in turn causes the "OperationalError: MySQL server has gone away" error.

trinth

1 Answer


SQLAlchemy's default behavior is to pool connections within the Engine:

http://www.sqlalchemy.org/docs/core/engines.html

http://www.sqlalchemy.org/docs/core/pooling.html
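A quick way to see this: closing a connection only returns it to the pool; the underlying DBAPI connection stays open, which is the single "Sleep" process you see in SHOW PROCESSLIST. A minimal sketch, with a placeholder URL:

```python
from sqlalchemy import create_engine

engine = create_engine("mysql://sqladmin:secret@my_host/kb_dev")

conn = engine.connect()
conn.close()  # returns the DBAPI connection to the pool; it is not actually closed

# the pool still reports one idle connection checked in
print(engine.pool.status())
```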

As for the overnight disconnect, this is a known MySQL behavior; SQLAlchemy provides the pool_recycle flag to work around it (see the sketch after the links below). Here are several links that describe it:

http://www.sqlalchemy.org/docs/dialects/mysql.html#connection-timeouts

http://www.sqlalchemy.org/docs/core/pooling.html#setting-pool-recycle

http://www.sqlalchemy.org/docs/core/engines.html#sqlalchemy.create_engine (pool_recycle)

http://www.sqlalchemy.org/trac/wiki/FAQ#MySQLserverhasgoneaway

Blog post from just a few days ago:

http://douglatornell.ca/blog/2012/01/08/staying-alive/
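A minimal sketch of the workaround might look like the following (the URL and the recycle interval are placeholders; pick an interval below MySQL's wait_timeout, which defaults to 8 hours):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# recycle pooled connections after an hour of age so MySQL's
# wait_timeout (8 hours by default) never gets a chance to kill them
engine = create_engine(
    "mysql://sqladmin:secret@my_host/kb_dev",
    pool_recycle=3600,
)
session = scoped_session(sessionmaker(bind=engine))
```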

zzzeek
  • Thanks zzzeek, it all makes sense now. I had no idea that connection pooling was enabled by default. But it looks like I'll be trying the pool_recycle option now. – trinth Jan 11 '12 at 19:17
  • @trinth were you able to solve your problem? I'm facing the same. – Shafiul Mar 19 '14 at 07:20
  • @giga it's been a while but I believe the pool_recycle option fixed it for me – trinth Mar 19 '14 at 17:37
  • pool_recycle or pool_size doesn't work for my situation: `self.engine = sqlalchemy.create_engine(url, pool_size=2, pool_recycle=1200)`. After 1200s, those connections are still in 'processlist'. – zzxwill Nov 21 '17 at 03:39