I am recording data quite rapidly into a SQLite database, which results in heavy fragmentation of the file on disk (typically ~2500 fragments per file). When I later search through this database it takes a long time, mostly because the file is on a hard disk drive (i.e., not an SSD). My goal is to keep fragmentation to a minimum, since that should improve my search performance.
I have tried setting the chunk size to ~100 MB, but once the data exceeds that amount the file starts growing in small increments again, which brings back the fragmentation. What I would like is a fixed growth step, so that the database file is extended by a whole chunk whenever it outgrows its current allocation, e.g. 100 MB -> 200 MB -> 300 MB. I actually thought the chunk size did this automatically, but it does not seem like it.
Storage capacity is not an issue for me; I can tolerate some overshoot in the database size (e.g. the file being 1 GB while the data is only 700 MB).
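In other words, I would like the allocation step to scale with the file. Below is a sketch of the manual workaround I have in mind; EnsureChunkSize and the fixed 100 MB step are my own illustration (only SQLiteConnection.SetChunkSize itself is a real System.Data.SQLite call), and it assumes the chunk size can simply be re-applied on the open connection every so often while recording:

using System;
using System.Data.SQLite;
using System.IO;

static class ChunkGrowth
{
    // Hypothetical fixed growth step (100 MB); purely my choice.
    private const long ChunkStep = 100L * 1024 * 1024;

    // Re-assert an allocation chunk that is the next multiple of the step
    // above the current file size, so the next extension of the database
    // is one large contiguous block instead of many small appends,
    // e.g. 150 MB on disk -> 200 MB chunk size.
    // (The int cast is fine as long as the file stays under 2 GB.)
    public static void EnsureChunkSize(SQLiteConnection connection, string dbFilename)
    {
        long fileSize = new FileInfo(dbFilename).Length;
        long target = ((fileSize / ChunkStep) + 1) * ChunkStep;
        connection.SetChunkSize((int)target);
    }
}

I would rather not have to call something like this after every batch of inserts, which is why I am hoping there is a built-in way to get this behavior.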
Any suggestions are more than welcome :)
The code below shows how I set up the SQLite connection:
SQLiteConnectionStringBuilder connBuilder = new SQLiteConnectionStringBuilder();
connBuilder.DataSource = dbFilename;
connBuilder.DateTimeKind = DateTimeKind.Utc;
connBuilder.Version = 3;
connBuilder.PageSize = 65536; // 64 KB pages (the maximum SQLite page size)
connBuilder.JournalMode = SQLiteJournalModeEnum.Wal;
var connection = new SQLiteConnection(connBuilder.ToString());
...
using (connection)
{
    connection.Open();
    // Grow/shrink the database file in ~10 MB chunks (SQLITE_FCNTL_CHUNK_SIZE).
    connection.SetChunkSize(10000000);
    ...
}