I have an SQLite database of about 75 GB. Creating an index on it takes almost 4 hours, and after indexing the file size grows to about 100 GB.
Often I have to modify (insert/delete/update) large chunks (a few GB) of data. At the moment I drop the indexes before modifying the tables and recreate them once the modifications are complete, as sketched below.
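For concreteness, here is a minimal Python sketch of that cycle. The table, column, and index names (`events`, `ts`, `idx_events_ts`) and the sample rows are placeholders, not my real schema:

```python
import sqlite3

con = sqlite3.connect("data.db")

# Step 1: drop the index -- this is the slow step I am asking about.
con.execute("DROP INDEX IF EXISTS idx_events_ts")

# Step 2: apply the bulk modifications inside a single transaction.
with con:
    con.executemany("INSERT INTO events (ts, payload) VALUES (?, ?)",
                    [(1, "a"), (2, "b")])  # stand-in for a few GB of rows
    con.execute("DELETE FROM events WHERE ts < ?", (0,))

# Step 3: recreate the index (this takes almost 4 hours on my data).
con.execute("CREATE INDEX idx_events_ts ON events (ts)")
con.close()
```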
Dropping the indexes takes a huge amount of time, of the same order as creating them.
In some very special cases, when the entire data set needs to be regenerated, I can write everything to a new database file and swap it for the original one. That strategy does not require me to drop the indexes at all; a sketch follows below.
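A sketch of that regenerate-and-swap approach, again with placeholder schema and data. The point is that SQLite never has to free the old index pages; the filesystem reclaims the entire old file in one operation:

```python
import os
import sqlite3

OLD, NEW = "data.db", "data.db.new"

# Rebuild the replacement database from scratch (placeholder schema and
# data; the real regeneration logic goes here).
con = sqlite3.connect(NEW)
con.execute("CREATE TABLE events (ts INTEGER, payload TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(1, "a"), (2, "b")])  # stand-in for the full data set
con.execute("CREATE INDEX idx_events_ts ON events (ts)")
con.commit()
con.close()

# Atomically replace the old file. No DROP INDEX ever runs inside SQLite.
os.replace(NEW, OLD)
```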
What can I do to speed up index deletion in the cases where I cannot simply switch database files? Any suggestions/ideas are welcome.
I think this is one of the limitations of single-file databases: if tables and indexes were stored in separate files, dropping an index would amount to deleting its file.