
I am reading a CSV from an API, and I am able to stream it into a pandas DataFrame:

df = pd.read_csv(iterable_to_stream(reply.iter_content()),
                 skiprows=7,
                 dtype=str,
                 na_filter=False)

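In case it matters, `iterable_to_stream` is just a small wrapper, roughly along the lines of the common recipe below, that turns the byte chunks yielded by `reply.iter_content()` into a file-like object that `read_csv` can consume (this is a sketch of the idea, not necessarily my exact helper):

import io

def iterable_to_stream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    # Wrap an iterable of byte chunks in a readable, buffered file-like object
    # so pd.read_csv can treat it like an open file.
    iterator = iter(iterable)

    class IterStream(io.RawIOBase):
        leftover = b""

        def readable(self):
            return True

        def readinto(self, b):
            try:
                chunk = self.leftover or next(iterator)
                output, self.leftover = chunk[:len(b)], chunk[len(b):]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0  # signal EOF

    return io.BufferedReader(IterStream(), buffer_size=buffer_size)
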
I checked the DataFrame and it all looks good. Then I wanted to put that data into an Oracle table. It's only 65 rows of VARCHAR2(100), so I did this:

df.to_sql(name='KR_PERSON_DETAILS_CSV_PD',
          con=db.engine,
          index=False,
          if_exists='append',
          dtype={col: types.VARCHAR(100) for col in df.columns})

When I do this I get the following error message:

    sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) ORA-01008: not all variables bound

How can this be? The table is created when I run it, and I double-checked all the columns.

CrabbyPete
  • What is your connection string? Changing the connection string might solve your problem. A similar question is [here](https://stackoverflow.com/questions/47540837/how-to-write-pandas-dataframe-to-oracle-database-using-to-sql). – S.Hashiba Sep 17 '20 at 21:29
  • The connection string is a SQLAlchemy engine: `url = f"oracle+cx_oracle://user:password@{dsn}"` followed by `self.engine = create_engine(url, echo=False, max_identifier_length=128)`. The SQL it produces looks good. – CrabbyPete Sep 18 '20 at 13:10

1 Answer


I discovered the problem: some of the column names were longer than 30 characters. I shortened those column names and now it works.
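
If anyone needs it, something along these lines is enough to trim the offending names before calling to_sql (30 characters is Oracle's identifier limit prior to 12.2; exact renaming is up to you):

# Truncate any column name longer than 30 characters before to_sql.
# Note: if two truncated names collide, you will have to de-duplicate them by hand.
df = df.rename(columns={c: c[:30] for c in df.columns if len(c) > 30})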

CrabbyPete