
Currently I push my dataframes into several SQLite tables using pandas' basic functionality, df.to_sql('df', if_exists="replace"). My idea is to 'optimize' the tables afterwards in "DB Browser for SQLite" by setting primary keys, foreign keys, indexes and so on.
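For reference, a minimal sketch of what I do now (the dataframe contents and the database file name are just placeholders):

```python
import sqlite3
import pandas as pd

# Placeholder data and database path
df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})
con = sqlite3.connect("example.db")

# Current approach: if_exists="replace" drops and recreates the table
# on every run, so any PKs/FKs/indexes added later in DB Browser are lost
df.to_sql("df", con, if_exists="replace", index=False)
```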

Problem: everything gets overridden as soon as I use if_exists="replace" again.

What I basically want is to replace only the data, without losing my database and table settings. I know that if_exists="append" would preserve my settings, but since I have to drop and replace the whole data set quite often, appending alone won't do the job.
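One workaround I have been considering is to wipe the rows first and then append into the existing table, so the schema is never dropped. A sketch (table and column names are made up; in reality the table would have been created once, with its constraints, in DB Browser):

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["x", "y"]})
con = sqlite3.connect(":memory:")

# Stand-in for the table I would have created once in DB Browser,
# including the primary key I want to keep
con.execute("CREATE TABLE df (id INTEGER PRIMARY KEY, value TEXT)")

# Replace the data without touching the schema:
# delete all rows, then append into the still-existing table
with con:
    con.execute("DELETE FROM df")
df.to_sql("df", con, if_exists="append", index=False)
```

This keeps the primary key intact across refreshes, but I'm not sure it's the intended pattern.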

I know that I can also create the tables and their metadata (PKs, FKs, etc.) directly from Python, but that feels much more complex. I'm confused about the correct path. Is there a best practice? Am I wrong, and is managing the database directly from Python the better/smarter way than combining pandas with "DB Browser for SQLite"?
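To be concrete, the "directly from Python" route I find complex is essentially issuing the DDL myself, something like this (all table and column names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Create the tables with their constraints up front,
# instead of adding them later in DB Browser
con.executescript("""
    CREATE TABLE category (
        category_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE item (
        item_id     INTEGER PRIMARY KEY,
        category_id INTEGER REFERENCES category(category_id),
        name        TEXT
    );
    CREATE INDEX idx_item_category ON item(category_id);
""")
```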

mucio
Christian

0 Answers