I have a table with around 70,000 rows, and I am trying to load the entire table into Python using the psycopg2 module.
When I use this code, it takes around 110 seconds just to load the table.
with conn.cursor() as cursor:
    cursor.execute("""SELECT * FROM Table;""")
    rows = cursor.fetchall()
print(rows)
I want to know what the bottleneck is in the above Python code: is it reading the table from the server, or converting the data into Python dictionaries and tuples? And is there any way to improve the runtime of the Python code?
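To separate the two possibilities, the two steps can be timed individually. A minimal sketch, assuming conn is the same open psycopg2 connection as above:

import time

with conn.cursor() as cursor:
    t0 = time.perf_counter()
    # with the default client-side cursor, execute() runs the query and
    # pulls the whole result set over the connection
    cursor.execute("""SELECT * FROM Table;""")
    t1 = time.perf_counter()
    # fetchall() builds the Python tuples, including json.loads
    # on the JSONB column
    rows = cursor.fetchall()
    t2 = time.perf_counter()

print(f"execute:  {t1 - t0:.2f} s")
print(f"fetchall: {t2 - t1:.2f} s")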
Now, when I use the following command, it loads the entire table and writes it to a file almost instantly, which suggests that reading the table itself is not the problem.
COPY (SELECT * FROM Table) TO 'file.txt';
Why is there such a large runtime difference between the two cases?
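For comparison, the same COPY can also be streamed to the Python client with psycopg2's copy_expert(), which transfers the raw text without building Python tuples or parsing the JSONB values. A sketch, again assuming conn is the open connection:

import io

buf = io.StringIO()
with conn.cursor() as cursor:
    # COPY ... TO STDOUT streams the raw text form of the rows to the client
    cursor.copy_expert("COPY (SELECT * FROM Table) TO STDOUT", buf)
print(f"received {len(buf.getvalue())} characters")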
The table consists of an id column and a JSONB column holding a dictionary.
Postgres 14.7
OS: Ubuntu 22.04 on WSL (everything running within WSL)
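Since the second column is JSONB, one thing I am not sure about is whether running json.loads on every row is the expensive part. A sketch of how that parsing could be skipped for a test, using psycopg2.extras.register_default_jsonb (conn is assumed to be the open connection):

import psycopg2.extras

# return JSONB values as raw strings instead of parsed dictionaries,
# to check whether JSON parsing dominates the runtime
psycopg2.extras.register_default_jsonb(conn_or_curs=conn, loads=lambda x: x)

with conn.cursor() as cursor:
    cursor.execute("""SELECT * FROM Table;""")
    rows = cursor.fetchall()  # JSONB values now come back as plain strings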
Edit: This command from the terminal also takes only 0.83 seconds.
bin/psql -d db -c "SELECT * FROM Table;" -o output_file.txt