I am working with a 500 GB SQLite3 database (many rows, few columns), and I am trying to create a new column and fill it with values. After adding a new column named QUOTE_DATE, I ran:
    UPDATE DATA
    SET QUOTE_DATE = date(QUOTE_DATETIME);
where DATA is the name of the table and QUOTE_DATETIME is another column that I am extracting dates from via the date() function.
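
For completeness, the column itself was added beforehand with a plain ALTER TABLE along these lines (a sketch; the actual column type may differ):

    -- Add the new, initially-NULL column that the UPDATE then fills in.
    ALTER TABLE DATA ADD COLUMN QUOTE_DATE TEXT;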
I have tested this on a much smaller (1 GB) version of the database and the code worked fine. However, when I ran it against the entire database, the process failed and ended with a Killed: 9.
Question: How can I fix this problem without updating in chunks?
Details: The database is stored on an external hard drive that is connected to my computer at all times. My computer's internal hard drive is much smaller and cannot hold the database, so I update the database directly on the external drive. Partway through the UPDATE, I noticed that my computer's internal hard drive started filling up, and it kept filling until it was completely full. A couple of minutes later, the UPDATE process stopped with a Killed: 9.
Updates
Possible Solution: This Stack Overflow answer provides a decent solution: update the database in chunks, as sketched below. That would take some work, but it is doable. However, I'd still like to understand why SQLite crashes when it shouldn't.
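
For reference, here is a minimal sketch of the chunked approach, assuming the table still has its implicit rowid column (the batch boundaries are just placeholders):

    -- Update one batch of rows at a time, committing between batches
    -- so each transaction's rollback journal stays small.
    BEGIN;
    UPDATE DATA
    SET QUOTE_DATE = date(QUOTE_DATETIME)
    WHERE rowid BETWEEN 1 AND 1000000;
    COMMIT;
    -- Repeat, advancing the rowid range (1000001-2000000, and so on)
    -- until every row has been updated.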
Possible Explanation: It seems that SQLite3 creates a journal while updating, in case something bad happens, like a power failure or a system crash. This journal is used to automatically roll back the database the next time it is accessed. Since the database is so big, the journal file might be very big and fill the entire external hard drive. It might then be the case that SQLite moves on to the next available disk and starts writing there. When that next disk (my computer's internal drive) fills up, the system kills the SQLite process to avoid further issues. This seems plausible, but unlikely, since my external hard drive still has around 800 GB of free space, which should be enough for a complete copy of the original database.
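
If it helps anyone reproduce or rule this out, the journaling behaviour described above can be inspected and changed through standard SQLite PRAGMAs (shown only to illustrate the mechanism, not as a confirmed fix):

    -- Show the current journal mode (the default is usually DELETE,
    -- which writes a rollback journal next to the database file).
    PRAGMA journal_mode;

    -- Switch to write-ahead logging, which replaces the rollback
    -- journal with a WAL file in the same directory as the database.
    PRAGMA journal_mode = WAL;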