
If we define max open files as 300 and the number of .sst files exceeds that, I assume the files in the cache will be evicted. But if the data in those evicted files were to be accessed, will it be reloaded, or is that file lost forever?

https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide

Raman

1 Answer


From the link you posted:

max_open_files -- RocksDB keeps all file descriptors in a table cache. If number of file descriptors exceeds max_open_files, some files are evicted from table cache and their file descriptors closed. This means that every read must go through the table cache to lookup the file needed. Set max_open_files to -1 to always keep all files open, which avoids expensive table cache calls.

This only means that if the number of open files is exceeded, some files will be closed. If you access a closed file, it will be re-opened (and possibly another file will be closed first to stay under the limit).

Hence, the config is not about creating/deleting files, but just about how many files to keep open in parallel.
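As a minimal sketch with the RocksDB Java API (the class name, path, and the limit of 300 from your question are placeholders), the limit is set via `Options.setMaxOpenFiles`; `-1` keeps all files open:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class MaxOpenFilesExample {
    static { RocksDB.loadLibrary(); }

    public static void main(String[] args) throws RocksDBException {
        try (Options options = new Options()
                .setCreateIfMissing(true)
                // Keep at most 300 file descriptors in the table cache; excess
                // .sst files are closed, not deleted.
                .setMaxOpenFiles(300);   // use -1 to always keep all files open
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-example")) {

            db.put("key".getBytes(), "value".getBytes());
            // This read works even if the file holding "key" had its descriptor
            // evicted from the table cache -- RocksDB simply re-opens the file.
            byte[] value = db.get("key".getBytes());
        }
    }
}
```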

Matthias J. Sax
  • I have set this option to 1024, following https://docs.confluent.io/current/streams/developer-guide/config-streams.html#rocksdb-config-setter, but when I count the number of open .sst files, it is still over 10k. Is there something else that needs to be configured in parallel? I have 5 instances on a single machine, each configured to use 4 threads; could that be the reason? Also note that the Streams application creates 8 Global Tables – user482963 May 21 '20 at 07:29
  • Well, `max_open_files` applies to each store; if you have 8 GKT, you have 8 stores, so you allow for 8 * 1024 open files. Not an expert on RocksDB; maybe there are other open files that don't count to the limit? – Matthias J. Sax May 21 '20 at 18:10
  • For me, RocksDB just returns an error: `Too many open files` and that's the end of the story. – Linas Apr 11 '21 at 15:54
  • @Linas could also be an OS limit you are hitting... – Matthias J. Sax Apr 11 '21 at 23:52
  • Yep. It's the `ulimit -n` limit. By default, rocks does not examine `getrlimit(RLIMIT_NOFILE)` and will happily open more files than the system allows, whereupon reads fail and writes are ignored. Failure to delete iterators will cause rocks to splurge with creating `*.sst` files, which will quickly exceed system disk and RAM resources. – Linas Apr 13 '21 at 20:48
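A minimal sketch of the kind of `RocksDBConfigSetter` discussed in the comments (class name and the 1024 limit are placeholders; assumes Kafka Streams 2.3+ where `close()` exists). As noted above, the limit applies per store, so the total across all stores must still fit under the OS `ulimit -n`:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class BoundedOpenFilesConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // Applies per store: with 8 stores this allows up to 8 * 1024 open
        // file descriptors, which must still fit under the OS `ulimit -n`.
        options.setMaxOpenFiles(1024);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Nothing was allocated in setConfig, so there is nothing to release.
    }
}
```

The setter would then be registered via `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedOpenFilesConfigSetter.class);`.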