I have a Java program that creates a log file about 1 KB in size. If I run a test that deletes the old log, creates a new log, and saves it, repeated a million times, and the size of the file grows over time (up to a few MB), do I risk damaging my SSD? Is there a size limit for the log file that would avoid this risk, or can anyone help me understand the mechanics of the risk?
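Roughly the kind of test loop I mean (a simplified sketch; the file name and log content are placeholders, and the real log is larger and grows over time):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class LogRewriteTest {
        public static void main(String[] args) throws IOException {
            Path log = Paths.get("test.log");                  // placeholder name
            byte[] entry = "log entry\n".getBytes(StandardCharsets.UTF_8);

            for (int i = 0; i < 1_000_000; i++) {
                Files.deleteIfExists(log);                     // delete the old log
                Files.write(log, entry);                       // create and save a new one
            }
        }
    }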
-
Operating systems do this much more than you can imagine. So SSDs are designed to deal with it via wear-leveling algorithms. – Mysticial May 20 '14 at 18:43
-
The file increasing in size is not an issue, since sectors already written won't be overwritten (unless the OS decides to defrag, which I find unlikely on an SSD). Of course, writing and deleting a file that uses 100 sectors a million times will mean 100 times the wear of writing and deleting a file that uses 1 sector a million times. – SJuan76 May 20 '14 at 18:46
-
@Mysticial any idea on a theoretical limit? – user2827214 May 20 '14 at 18:55
-
@user2827214 Depends on the quality of the SSD. The difference between a good SSD and a bad SSD could be several orders of magnitude. – Mysticial May 20 '14 at 18:57
-
The real question is: why would a test need to write all that stuff to a physical disk a million times, whether it is a good SSD or a poor HDD? The only good reason for that would be to test the wear of the hardware in case a log file is written or rewritten millions of times in production. But that is exactly what the author is wary of -- putting his hardware under test :) – Oleg Sklyar May 20 '14 at 21:24
2 Answers
In the case of constantly opening and closing the same file with a gradual increase in file size, there are two protection mechanisms, at the File System level and at the SSD level, that prevent early disk failure.
First, on every file delete the File System will issue a Trim (aka Discard, aka Logical Erase) command to the SSD. The Trim address range will cover the entire size of the deleted file. Trim greatly helps the SSD reclaim free space for new data. Using Trim in combination with Writes when accessing the same data range is the best operational mode for an SSD in terms of preserving its endurance. Just make sure that your OS has Trim enabled (it usually is by default); all modern SSDs should support it as well. One important note: Trim is a logical erase, so it does not trigger an immediate Physical Media erase. The Physical Erase is initiated later, indirectly, as part of the SSD's internal Garbage Collection.
Second, when accessing the same file, the File System will most likely issue Writes to the SSD at the same logical address; only the amount of data written will grow as the file grows. Such a pattern is known as Hot Range access, and it is a nasty pattern for an SSD in terms of endurance. The SSD has to allocate free resources (physical pages) on every file write, but the lifetime of the data is very short because it is deleted almost immediately. Overall, the amount of unique data on the SSD's Physical Media is very low, while the amount of allocated and processed resources (physical pages) is huge. Modern SSDs protect against Hot Range access by using Physical Media units in a round-robin manner, which evens out the wear.
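To get a feel for how the total write volume grows with the file, here is a rough back-of-the-envelope sketch. The growth from 1 KB to 3 MB over a million iterations is purely an assumption taken from the question, and it ignores File System metadata and the SSD's internal write amplification:

    public class HostWriteEstimate {
        public static void main(String[] args) {
            // Assumption: the file grows linearly from ~1 KB to ~3 MB over
            // 1,000,000 delete/rewrite cycles and is fully rewritten each time.
            long iterations = 1_000_000L;
            double startBytes = 1_024;               // ~1 KB
            double endBytes = 3.0 * 1_024 * 1_024;   // ~3 MB

            double totalBytes = 0;
            for (long i = 0; i < iterations; i++) {
                totalBytes += startBytes + (endBytes - startBytes) * i / (iterations - 1);
            }
            System.out.printf("Approximate host writes: %.2f TB%n",
                    totalBytes / Math.pow(1024, 4));
        }
    }

Whether a number in that range matters depends on the drive's rated endurance, which is why monitoring the drive's own health data (below) is the more reliable guide.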
I advise monitoring the SSD's SMART health data (the life-time left parameter), for example with https://www.smartmontools.org/ or with software provided by the SSD vendor. It will help you see how your access pattern affects endurance.
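If you want to poll that from the same Java process, one option is to shell out to smartctl (a rough sketch; it assumes smartmontools is installed, that /dev/sda is the right device, and that the process has permission to query it -- the attribute names it prints vary by vendor):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class SmartHealthCheck {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Device path and required privileges (usually root) are system-specific.
            Process p = new ProcessBuilder("smartctl", "-A", "/dev/sda")
                    .redirectErrorStream(true)
                    .start();
            try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);   // look for the wear/life-time attributes
                }
            }
            p.waitFor();
        }
    }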

As with any file, if the disk doesn't have enough space left to write to a file, the OS (or Java) won't allow the file to be written until space is cleared. The only way you can "screw up" a disk in this manner is if you mess around with addresses at the kernel level.
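If you want to check that condition from Java before writing, File.getUsableSpace() reports how much the JVM may still write to that partition (a minimal sketch; the directory is just an example):

    import java.io.File;

    public class FreeSpaceCheck {
        public static void main(String[] args) {
            File logDir = new File(".");   // example: the directory the log lives in
            long usableBytes = logDir.getUsableSpace();
            System.out.printf("Usable space: %.2f GB%n",
                    usableBytes / (double) (1L << 30));
        }
    }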
