I have a Postgres database that uses the `public` schema, accessed from within a Python application. Before running `pg_dump`, tables can be accessed without a schema-qualified name; for example, `select * from user` works. After running `pg_dump`, `select * from user` fails with a `relation "user" does not exist` error, but `select * from public.user` works. No restore has been performed.
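Concretely, from the application's pooled connection the behavior looks like this (output paraphrased):

```sql
-- before pg_dump has run: the unqualified name resolves
SELECT * FROM user;          -- works

-- after pg_dump has run:
SELECT * FROM user;          -- ERROR: relation "user" does not exist
SELECT * FROM public.user;   -- works
```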
Since this is inside an application, I cannot change the queries to include the schema. The application uses `sqlalchemy` and `pgbouncer` to interact with the database.
While trying to figure out what's happening, I've discovered that running `pg_dump` causes the active session to change. Before running the command, querying `pg_stat_activity` shows 10 sessions in the pool: one active and nine idle. After running the command, a different session is active and the other nine are idle. Also, the settings in `pg_db_role_setting` and `pg_user` look correct for the session I can see. But even when those look correct, `select * from user` still fails.
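The checks I'm running look roughly like this (exact filters and column choices are my paraphrase, not the literal queries):

```sql
-- which pooled sessions exist and which one is active
SELECT pid, state, usename, application_name
FROM pg_stat_activity
WHERE datname = current_database();

-- per-database / per-role configuration overrides
SELECT * FROM pg_db_role_setting;

-- role attributes for the connecting user
SELECT * FROM pg_user;
```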
Also, just for reference: the code currently runs fine without `pg_dump`. As soon as I add the `pg_dump` call, I see the issues mentioned above.
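For context, the call I added is essentially the following (the database name, output path, and the use of `subprocess` are illustrative placeholders; connection parameters come from the application's configuration in the real code):

```python
import subprocess

def pg_dump_command(dbname: str, outfile: str) -> list[str]:
    """Build the pg_dump invocation (placeholder values, not the real DSN)."""
    return ["pg_dump", "--format=custom", "--file", outfile, dbname]

def dump_database(dbname: str, outfile: str) -> None:
    # Run pg_dump as a subprocess; host/user/password are taken from the
    # environment (PGHOST, PGUSER, ...) rather than hard-coded here.
    subprocess.run(pg_dump_command(dbname, outfile), check=True)
```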
Is there anything in `pg_dump` that could be causing what I'm seeing, or is it just a result of moving to another session? Since these are some of the first commands run after migrations, is there any reason the sessions should have different settings? What else am I missing?
Thanks!