
I'm trying to run a large query using SQLAlchemy with the following code on a Postgres 9.4 RDS server set up as a read replica.

    # self.sa_engine is a SQLAlchemy engine
    with self.sa_engine.connect() as conn:
        conn = conn.execution_options(stream_results=True)

        # pd = pandas
        # self.sql = "select * from mylargetable"
        for chunk in pd.read_sql(self.sql, conn, chunksize=50000):
            # do stuff, write file, etc....

The problem is that I get the following error after about 30-60 seconds. During this time, files are being written as expected.

    TransactionRollbackError: terminating connection due to conflict with recovery
    DETAIL:  User query might have needed to see row versions that must be removed.

Everything I've googled says to set the following parameters on the read replica in RDS:

    hot_standby_feedback=1
    max_standby_archive_delay=600000
    max_standby_streaming_delay=600000

With these parameters set, I would expect to get the above error only if the query ran for longer than 10 minutes, but I'm getting it after 30-60 seconds.
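To rule out the parameter group simply not being applied, one can query the replica's live settings and compare them against the expected values. This is just an illustrative sketch, not my actual code: the `pg_settings` query in the comment and the `check_replica_settings` helper are assumptions, and note that `pg_settings` reports every value as a string (the delays are in milliseconds).

```python
# Sanity check for the replica's standby settings (sketch).
# The `settings` dict would come from running this on the replica:
#   SELECT name, setting FROM pg_settings
#   WHERE name IN ('hot_standby_feedback',
#                  'max_standby_archive_delay',
#                  'max_standby_streaming_delay');
# pg_settings reports every value as a string.

EXPECTED = {
    "hot_standby_feedback": "on",
    "max_standby_archive_delay": 600000,    # milliseconds
    "max_standby_streaming_delay": 600000,  # milliseconds
}

def check_replica_settings(settings):
    """Return (name, actual, expected) tuples for any setting below expectations."""
    problems = []
    for name, expected in EXPECTED.items():
        actual = settings.get(name)
        if isinstance(expected, str):
            ok = actual == expected
        else:
            ok = actual is not None and int(actual) >= expected
        if not ok:
            problems.append((name, actual, expected))
    return problems
```

If this comes back clean and streaming queries still die after roughly 30 seconds, the culprit is likely some other timeout whose default happens to be 30 seconds.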

Additionally, my understanding of this issue is that it should only occur if the table in the master database is being modified while the query on the replica is running. However, this table hasn't been updated in months.

All of this works when I run it against the master database (which I can't do in production) and when I run it against smaller tables on the read replica.

I'm totally stumped and would appreciate any help.

2 Answers


Right after asking this question, I searched for parameters that default to 30 seconds and found the culprit. There's one more parameter I had to adjust:

    wal_receiver_timeout=600000

Setting that did the trick!
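For reference, here is the complete set of replica parameters this leaves me with, as they would appear in the RDS parameter group (all delay/timeout values in milliseconds):

    hot_standby_feedback=1
    max_standby_archive_delay=600000
    max_standby_streaming_delay=600000
    wal_receiver_timeout=600000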

  • That's surprising. It must be something specific to RDS, because [according to the documentation](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC_WAL_RECEIVER_TIMEOUT) it should just terminate the replication connection between primary and standby if the former goes down, not interrupt long queries. – Laurenz Albe Nov 27 '18 at 03:27
  • Yeah, another strange thing about this situation is that I had a similar process running using Ruby postgres libraries and had no problem. I think there's something about the mechanism by which psycopg2 is setting up the streaming that is causing it. – Sterling Paramore Nov 27 '18 at 16:46

If for any reason you cannot change the database's configuration, or if that does not help, a fully client-side fix could be to create the engine with `isolation_level="REPEATABLE_READ"`:

    my_engine = sqlalchemy.create_engine(my_db_url, isolation_level="REPEATABLE_READ")

This is similarly suggested in the thread: https://www.postgresql.org/message-id/7F74C5EA-6741-44FC-B6C6-E96F18D761FB@simply.name

This "fix" should of course be used with an understanding of isolation levels: https://www.postgresql.org/docs/14/transaction-iso.html