
I have an SQLite database that is about 75 GB. It takes almost 4 hours to create an index on it. After indexing, the file size is about 100 GB.

Often, I have to modify (insert/delete/update) large chunks (a few GB) of data. As of now, I drop the indexes before modifying the tables. After the modifications are complete, the indexes are recreated.
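
Roughly, the drop-then-recreate workflow looks like the sketch below (a minimal Python/sqlite3 example; the file name, the table `records`, and the index `idx_records_key` are made up for illustration, since the real schema is not shown here):

```python
import sqlite3

# Placeholder names for illustration only; substitute the real schema.
conn = sqlite3.connect("data.db")
conn.execute("CREATE TABLE IF NOT EXISTS records (key INTEGER, value TEXT)")

with conn:
    # 1. Drop the index before the bulk modification
    #    (this is the slow step the question is about).
    conn.execute("DROP INDEX IF EXISTS idx_records_key")

    # 2. Apply the bulk insert/delete/update.
    rows = ((i, f"value-{i}") for i in range(1_000_000))  # stand-in data
    conn.executemany("INSERT INTO records (key, value) VALUES (?, ?)", rows)

    # 3. Recreate the index once the modifications are done.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_records_key ON records (key)")

conn.close()
```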

Dropping the indexes takes a huge amount of time (of the same order as creating them).

In some very special cases (when the entire data set needs to be regenerated), I am able to write to a new database file and replace the original one with it. This strategy does not require me to drop the indexes.
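
For that special case, the swap looks roughly like this (again with placeholder names, and assuming the new file does not already exist; `os.replace` does an atomic rename on the same filesystem, so the old file and its indexes are simply discarded):

```python
import os
import sqlite3

# Placeholder paths and schema for illustration only.
OLD_DB = "data.db"
NEW_DB = "data.new.db"

# Regenerate the entire data set into a fresh database file.
new_conn = sqlite3.connect(NEW_DB)
with new_conn:
    new_conn.execute("CREATE TABLE records (key INTEGER, value TEXT)")
    rows = ((i, f"value-{i}") for i in range(1_000_000))  # stand-in data
    new_conn.executemany("INSERT INTO records (key, value) VALUES (?, ?)", rows)
    # Build the index only after all the data has been loaded.
    new_conn.execute("CREATE INDEX idx_records_key ON records (key)")
new_conn.close()

# Swap the new file in for the original; no DROP INDEX is ever issued
# against the old file, which is just replaced on disk.
os.replace(NEW_DB, OLD_DB)
```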

What can I do to speed up index deletion in the cases where I cannot just switch the database files? Any suggestions/ideas are welcome.

This is, I think, one of the limitations of single-file databases. If tables/indexes were stored in separate files, then those files could simply be marked as deleted.

Santosh Tiwari
  • Not that it won't work, but I think your problem may be related to having 100GB in SQLite. I've never heard anyone use it with that amount of data. – Joachim Isaksson Aug 23 '12 at 16:11
  • I cannot control the amount of data that the user generates (except by specifically splitting the information and saving it into different database files myself). The good thing is that even with a 100GB file (with 3B records), the queries are still under 1 second. – Santosh Tiwari Aug 23 '12 at 16:15
  • There is no reason SQLite can't handle databases of size 100GB, but as with all relational databases, the performance of some operations (particularly table or index rebuilds) will suffer greatly if there is not enough RAM to hold the whole database at once. Try modifying some indexes on a 100GB MySQL or Postgres table with limited RAM and it will also take forever. – thomasrutter Sep 02 '16 at 00:29

0 Answers