Setup: I installed the connector with pip install mysql-connector-python and use import mysql.connector.
I have a database with millions of rows, so the fetchall() method was causing memory issues:
db_cursor.execute(sql_query, sql_values)
for row in db_cursor.fetchall():
    yield row
So I have now changed the code as follows; I am iterating over the cursor directly:
db_cursor.execute(sql_query, sql_values)
for row in db_cursor:
    yield row
This seems to work fine, and the full database is no longer loaded into memory.
Now the problem is that while I'm iterating over the generator, I need to perform another query.
To give you an idea, here is the flow:
1. Start iterating over the cursor.
2. For each row, extract the row ID.
3. With that row ID, perform another query. And here is the problem: mysql.connector raises
raise InternalError("Unread result found")
mysql.connector.errors.InternalError: Unread result found
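To make the flow concrete, here is a minimal sketch of what my code does; the table and column names (big_table, details, parent_id) are placeholders, not my real schema:

```python
# Sketch of the flow described above; table/column names are placeholders.

def stream_rows(connection, sql_query, sql_values):
    """Yield rows one at a time instead of loading them all with fetchall()."""
    db_cursor = connection.cursor(prepared=True, dictionary=True)
    db_cursor.execute(sql_query, sql_values)
    for row in db_cursor:  # streaming: the result set stays open on the connection
        yield row

def process(connection):
    for row in stream_rows(connection, "SELECT id FROM big_table", ()):
        # A *new* cursor, but on the *same* connection -- this execute() is
        # the call that raises InternalError("Unread result found"), because
        # the outer cursor still has unread rows pending.
        inner = connection.cursor(prepared=True, dictionary=True)
        inner.execute("SELECT * FROM details WHERE parent_id = %s", (row["id"],))
        inner.fetchall()
```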
I know this happens because I haven't finished iterating over the first cursor, but the odd thing is that I'm not using the same cursor.
To perform the new query I use the active connection and create a new cursor:
db_cursor = self._db_connection.cursor(prepared=True, dictionary=True)
So my question is: why does it raise InternalError("Unread result found") even though I'm using a different cursor?
Is it because I'm creating the new cursor from the same opened connection?
Would I need to create a new connection too?
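For reference, the "second connection" variant I'm asking about would look roughly like this. This is only a sketch: the config dict and the queries are placeholders, and I haven't confirmed this is the intended fix.

```python
# Hypothetical sketch: inner queries run on their own connection so they
# don't conflict with the outer cursor's unread result set.

def process_with_two_connections(config, outer_query, inner_query):
    import mysql.connector  # imported here so the sketch stands alone

    outer_conn = mysql.connector.connect(**config)
    inner_conn = mysql.connector.connect(**config)  # independent result stream

    outer = outer_conn.cursor(prepared=True, dictionary=True)
    outer.execute(outer_query)
    for row in outer:
        inner = inner_conn.cursor(prepared=True, dictionary=True)
        inner.execute(inner_query, (row["id"],))
        inner.fetchall()  # fully consume the inner result before the next row
        inner.close()

    outer_conn.close()
    inner_conn.close()
```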