3

I am using RocksJava, and after running the service for some time I see a "Too many open files" exception. Digging through previous issues on the portal, I found that it is caused by the system limit on the maximum number of open files. When I check the directory RocksDB is using, I see over 100K SST files of 1 KB each, which is probably the reason for the error. Is there any way to configure RocksDB to generate larger SST files, so that fewer files are created in total and this error is avoided?

Also, in my current project there are many read threads and one write thread, and I open and close the connection (using RocksDB.open() and RocksDB.close()) before every read or write.
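For reference, a single RocksDB instance is safe to share across concurrent reader threads and a writer thread, so the open/close-per-operation pattern described above can be replaced by opening the database once. This is a minimal sketch; the `SharedDb` class and the database path are hypothetical names, and it assumes the `rocksdbjni` dependency is on the classpath:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical holder that opens the database once and shares the handle
// across all reader threads and the single writer thread, instead of
// calling RocksDB.open()/close() around every operation.
public final class SharedDb {
    private static RocksDB db;

    public static synchronized RocksDB get(String path) throws RocksDBException {
        if (db == null) {
            RocksDB.loadLibrary();
            Options options = new Options().setCreateIfMissing(true);
            db = RocksDB.open(options, path);
        }
        return db;
    }

    public static synchronized void shutdown() {
        if (db != null) {
            db.close();
            db = null;
        }
    }
}
```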

ilim
AmanSinghal

3 Answers

3

You can use these two options to create larger SST files: target_file_size_base and target_file_size_multiplier. See the doc for details.

Also, you can use the max_open_files option to limit the number of files that RocksDB keeps open. However, for good performance I suggest increasing the system limit on the maximum number of open files and setting max_open_files to -1.
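In the Java API these options are exposed as setters on `org.rocksdb.Options`. A minimal sketch, assuming the `rocksdbjni` dependency is available (the path `/tmp/rocksdb-example` is illustrative):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class LargerSstFiles {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()
                .setCreateIfMissing(true)
                // Target size for level-1 SST files; raise it to get
                // fewer, larger files (64 MB is the documented default).
                .setTargetFileSizeBase(64 * 1024 * 1024L)
                // Files at each deeper level are this many times larger.
                .setTargetFileSizeMultiplier(2)
                // -1 lets RocksDB keep all files open; raise the OS
                // open-file limit (ulimit -n) accordingly.
                .setMaxOpenFiles(-1);
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-example")) {
            // use db ...
        }
    }
}
```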

for_stack
  • Thanks, can you please help with how to set these values in RocksJava? – AmanSinghal Aug 06 '17 at 08:45
  • I checked the API, but I couldn't find any way to set these values using the Java API. – AmanSinghal Aug 06 '17 at 08:51
  • @AmanSinghal It seems that the Java API uses another naming rule: `org.rocksdb.Options`, `setTargetFileSizeBase`, `setTargetFileSizeMultiplier`, `setMaxOpenFiles` – for_stack Aug 06 '17 at 08:54
  • Which version did you check? I am not able to find these methods in version 5.1.2. – AmanSinghal Aug 06 '17 at 09:01
  • Also, I checked the value of target_file_size_base in the generated options file; it shows 67108864 (64 MB), which is quite large, so I still don't understand why it is creating so many SST files. – AmanSinghal Aug 06 '17 at 09:06
  • What I generally notice is that with every new key in the column family it creates a new SST file. – AmanSinghal Aug 06 '17 at 09:09
  • I checked this one: https://github.com/facebook/rocksdb/blob/master/java/src/main/java/org/rocksdb/Options.java – for_stack Aug 06 '17 at 09:45
  • `with every new key in the column family it is creating a new sst file` That's strange; I haven't run into that problem before. – for_stack Aug 06 '17 at 09:47
0

You can also try setting a higher value for write_buffer_size, which should produce fewer, larger SST files.
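In RocksJava this corresponds to the `setWriteBufferSize` setter on `org.rocksdb.Options`. A short sketch (the 128 MB value is illustrative; the documented default is 64 MB):

```java
import org.rocksdb.Options;

public class WriteBufferConfig {
    public static Options build() {
        // A larger memtable means each flush writes a bigger SST file,
        // so fewer files accumulate overall.
        return new Options()
                .setCreateIfMissing(true)
                .setWriteBufferSize(128 * 1024 * 1024L); // 128 MB
    }
}
```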

Naman Shah
-1

By default, when you call RocksDB.open(), RocksDB replays the WAL to recover the memtable and then flushes the memtable to an SST file. That's why you end up with so many small SST files.

To avoid this behaviour, set the avoid_flush_during_recovery option to true when opening the DB. The Java setter is named avoidFlushDuringRecovery(). Also, never call flush explicitly or trigger an implicit flush (such as createCheckpoint, etc.) in your code.
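A minimal sketch of opening the DB with this option via `setAvoidFlushDuringRecovery` on `org.rocksdb.Options`, assuming the `rocksdbjni` dependency is available (the `RecoveryWithoutFlush` class name is hypothetical):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RecoveryWithoutFlush {
    public static RocksDB open(String path) throws RocksDBException {
        RocksDB.loadLibrary();
        Options options = new Options()
                .setCreateIfMissing(true)
                // Keep data recovered from the WAL in the memtable
                // instead of flushing it to a new SST file on every open.
                .setAvoidFlushDuringRecovery(true);
        return RocksDB.open(options, path);
    }
}
```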

See the code here.

Eric Fu