I'm trying to read a large table from an Oracle database and save it as a local CSV file in Python 3. Here's my code:
import cx_Oracle
import pandas as pd
user = 'me'
password = 'password'
dsn = 'dsn'
con = cx_Oracle.connect(user, password, dsn)
for i, chunk in enumerate(pd.read_sql("select col_a, col_d, col_s from my_table", con, chunksize=10**4)):
    # append after the first chunk, and only write the header once;
    # opening in write mode every iteration overwrites the previous chunks
    chunk.to_csv(r"my_path\my_file.csv", mode="w" if i == 0 else "a", header=(i == 0), index=False)
However, the table has 200k+ rows, and even though I'm selecting only a dozen of its 80+ columns, the code above runs painfully slowly.
Is there a faster way to read the table and save it as a CSV?
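For context, here is the kind of direct DB-API approach I've been considering: skip pandas entirely and stream rows with `cursor.fetchmany()` into `csv.writer`. This is only a sketch against an in-memory SQLite database (I can't share the Oracle instance), but since cx_Oracle follows the same DB-API cursor interface the pattern should carry over, and with cx_Oracle a large `cursor.arraysize` also reduces network round trips. The table and column names here are made up to match my example.

```python
import csv
import sqlite3  # stands in for cx_Oracle; both expose DB-API cursors

# Build a small throwaway table so the sketch is self-contained.
con = sqlite3.connect(":memory:")
con.execute("create table my_table (col_a integer, col_d text, col_s text)")
con.executemany(
    "insert into my_table values (?, ?, ?)",
    [(i, f"d{i}", f"s{i}") for i in range(1000)],
)

cur = con.cursor()
cur.arraysize = 10**4  # fetchmany() batch size; with cx_Oracle this cuts round trips
cur.execute("select col_a, col_d, col_s from my_table")

with open("my_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # cursor.description holds one 7-tuple per column; the name is first
    writer.writerow(col[0] for col in cur.description)
    while True:
        rows = cur.fetchmany()
        if not rows:
            break
        writer.writerows(rows)
```

Would something like this be noticeably faster than chunked `pd.read_sql`, or is the bottleneck likely elsewhere (e.g. the network fetch itself)?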