I am trying to load several million rows into a table in a single transaction. The table is a "follow" table with two foreign key columns referencing the user table, plus indexes on those columns. My initial attempt caused the script to be killed because system memory was exhausted.
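For reference, the table looks roughly like this (the names are illustrative, not the exact schema):

CREATE TABLE follow (
    id          bigserial PRIMARY KEY,
    follower_id integer NOT NULL REFERENCES auth_user (id),
    followee_id integer NOT NULL REFERENCES auth_user (id)
);
-- the associated indexes on the foreign key columns
CREATE INDEX follow_follower_id_idx ON follow (follower_id);
CREATE INDEX follow_followee_id_idx ON follow (followee_id);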
Some research suggested that the crash was caused by the foreign key constraints: apparently each inserted row queues a trigger event for the constraint check, and that queue is held in memory for the duration of the transaction. So I verified that the table was empty (that is, the transaction that got the process killed never committed) and modified my script to drop the foreign key constraints and indexes, insert the data, and recreate the constraints and indexes afterwards.
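Roughly, the modified script does this (constraint and index names here are illustrative):

ALTER TABLE follow DROP CONSTRAINT follow_follower_id_fkey;
ALTER TABLE follow DROP CONSTRAINT follow_followee_id_fkey;
DROP INDEX follow_follower_id_idx;
DROP INDEX follow_followee_id_idx;

-- bulk load the several million rows
COPY follow (follower_id, followee_id) FROM STDIN;

-- recreate everything afterwards
CREATE INDEX follow_follower_id_idx ON follow (follower_id);
CREATE INDEX follow_followee_id_idx ON follow (followee_id);
ALTER TABLE follow ADD CONSTRAINT follow_follower_id_fkey
    FOREIGN KEY (follower_id) REFERENCES auth_user (id);
ALTER TABLE follow ADD CONSTRAINT follow_followee_id_fkey
    FOREIGN KEY (followee_id) REFERENCES auth_user (id);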
However, the ALTER TABLE ... DROP CONSTRAINT statement for the first foreign key is taking a very long time (tens of minutes and counting), even though the table is completely empty.
The only explanation I can think of is the large amount of data the crashed script wrote to the table without committing. But since that transaction was never committed, I cannot find any trace of the data in the database.
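If it is relevant, a check along these lines should show whether the aborted rows still exist on disk as dead tuples (rows from an aborted transaction remain until vacuum removes them, even though no query can see them):

SELECT n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'follow';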
What could cause this statement to be so slow (or to never complete; as of this writing it is still running), and how can I avoid it?
There are other transactions open in the database (several-hour-long transactions migrating other very large tables), but none of them touch the follow table.
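To double-check that, the open transactions and what they are running can be listed with something like:

SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;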
Edit: the pg_locks output is below:
db=# select relation::regclass, * from pg_locks where not granted;
-[ RECORD 1 ]------+--------------------
relation           | auth_user
locktype           | relation
database           | 53664
relation           | 54195
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 5/343
pid                | 17300
mode               | AccessExclusiveLock
granted            | f
The pid above (17300) is the ALTER TABLE statement itself (note it is waiting on auth_user, the referenced table, not on follow). No other locks show up as ungranted, and no other processes appear to be waiting for locks.
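For completeness, on PostgreSQL 9.6 or later the following should reveal which backends are blocking the ALTER TABLE (17300 is the pid from the output above):

SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE pid = 17300;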