First, my use case: on my Linux-based server I am getting unsatisfactory disk I/O performance for small files, and am limited to the roughly 100 IOPS that a 7200 rpm HDD can sustain. This is of course expected, and I am looking for a way to improve it. It is especially problematic because I work with code bases comprising tens of thousands of source and object files. The total amount of data is too large to store economically on SSDs, and separating the large files (which take up the majority of the storage) from the small ones is not possible.
The typical solution would be a caching layer such as lvmcache, but as I understand it, in the standard configuration it only benefits frequently accessed blocks (please correct me if I'm wrong!). That does not fit my use case: the files are accessed rarely and in a fairly random pattern.
Hence my question: is it possible to configure a cache to prefetch small files, and does that make sense? They make up only a small percentage of the total storage utilization and would fit entirely on an SSD, where I would like them to live permanently for on-demand access. I see no inherent technical obstacle, but I was unable to find any documented behavior like this, except in some supercomputer storage systems ^^
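To frame what I mean by "prefetch": the closest workaround I can picture is warming the OS page cache myself, rather than a persistent SSD cache. Below is a minimal Python sketch of that idea, assuming Linux and Python 3.3+ for `os.posix_fadvise`; the 64 KiB threshold, the function name `prefetch_small_files`, and the root path are my own illustrative choices. Note this is not the persistent SSD residency I am asking about, since the cached pages evaporate under memory pressure.

```python
#!/usr/bin/env python3
"""Illustrative sketch: warm the page cache with every small file in a tree.

Not a cache configuration, just a poor man's prefetch to show the idea.
"""
import os
import sys

SMALL = 64 * 1024  # size threshold for a "small" file (arbitrary assumption)


def prefetch_small_files(root: str, limit: int = SMALL) -> None:
    """Walk `root` and hint the kernel to cache every file <= `limit` bytes."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.stat(path, follow_symlinks=False).st_size
            except OSError:
                continue  # skip files that vanish or are unreadable
            if 0 < size <= limit:
                fd = os.open(path, os.O_RDONLY)
                try:
                    # Ask the kernel to read the whole file into the page cache.
                    os.posix_fadvise(fd, 0, size, os.POSIX_FADV_WILLNEED)
                finally:
                    os.close(fd)


if __name__ == "__main__":
    prefetch_small_files(sys.argv[1] if len(sys.argv) > 1 else ".")
```

(I believe tools like vmtouch do essentially this more thoroughly, but again only into RAM, which is why I am asking about an SSD-backed cache instead.)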