Questions tagged [rocksdb]

RocksDB is an embeddable persistent key-value store for fast storage. RocksDB can also be the foundation for a client-server database but our current focus is on embedded workloads.

About

RocksDB builds on LevelDB to be scalable to run on servers with many CPU cores, to efficiently use fast storage, to support IO-bound, in-memory and write-once workloads, and to be flexible to allow for innovation.

474 questions
2
votes
1 answer

Why do I have to configure a state store with Kafka Streams?

Currently I have the following setup: StoreBuilder storeBuilder = Stores.keyValueStoreBuilder( Stores.persistentKeyValueStore("kafka.topics.table"), new SomeKeySerde(), new…
adpap
  • 209
  • 3
  • 10
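Registering a state store is required because Kafka Streams must know about the store to manage its lifecycle, fault tolerance (changelog topic), and RocksDB instance. A minimal sketch of the builder pattern from the excerpt above, assuming the standard Kafka Streams API; the store name and String/Long serdes here are illustrative:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

// persistentKeyValueStore() gives a RocksDB-backed store; the builder
// wires in the serdes and, by default, a changelog topic for recovery.
StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
        Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("example-store"),
                Serdes.String(),
                Serdes.Long());

builder.addStateStore(storeBuilder);
// A Processor/Transformer that declares "example-store" can then
// retrieve it via context.getStateStore("example-store").
```

Without this registration step, a processor asking for the store by name would fail at topology build time, which is why the configuration cannot be skipped.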
2
votes
3 answers

Creating RocksDB SST file in Java for bulk loading

I am new to RocksDB and trying to create an SST file in Java for bulk loading. The eventual use case is to create this in Apache Spark. I am using rocksdbjni 6.3.6 on Ubuntu 18.04.3 and I keep getting this error: org.rocksdb.RocksDBException: Keys must be…
Saba
  • 41
  • 4
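The truncated error is almost certainly "Keys must be added in strict ascending order": SstFileWriter requires keys in the table's comparator order, which is unsigned byte-wise by default. A stdlib-only sketch of pre-sorting entries with the same rule before feeding them to the writer; the class and method names here are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class SstKeyOrder {
    // RocksDB's default BytewiseComparator orders keys as unsigned bytes,
    // so sort with the same rule before calling SstFileWriter.put().
    static final Comparator<byte[]> BYTEWISE = Arrays::compareUnsigned;

    public static TreeMap<byte[], byte[]> sortForSst(Map<String, String> entries) {
        TreeMap<byte[], byte[]> sorted = new TreeMap<>(BYTEWISE);
        for (Map.Entry<String, String> e : entries.entrySet()) {
            sorted.put(e.getKey().getBytes(StandardCharsets.UTF_8),
                       e.getValue().getBytes(StandardCharsets.UTF_8));
        }
        return sorted;
    }

    public static void main(String[] args) {
        TreeMap<byte[], byte[]> sorted =
                sortForSst(Map.of("banana", "2", "apple", "1", "cherry", "3"));
        // Iterating the TreeMap now yields keys in the order SstFileWriter
        // expects; each pair would go to writer.put(key, value).
        for (byte[] key : sorted.keySet()) {
            System.out.println(new String(key, StandardCharsets.UTF_8));
        }
        // prints apple, banana, cherry
    }
}
```

In Spark, the same idea applies per partition: sort each partition's keys byte-wise before writing its SST file.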
2
votes
0 answers

Kafka Stream State Store using RocksDB-Cloud

Is it possible to configure Kafka Streams to use RocksDB-Cloud rather than the default RocksDB as the storage engine? If so, is there any configuration recipe? I would like to persist data in S3 buckets instead of the local filesystem.
dbaltor
  • 2,737
  • 3
  • 24
  • 36
2
votes
1 answer

Flink RocksDB not creating sst files in taskmanager

I am using flink-1.4.2 with Scala, and RocksDB is used as the state backend, but I am not seeing SST files in the taskmanager.
2
votes
2 answers

Build rocksdb static library inside R package

I tried to use the rocksdb inside R package. I used the following src/Makevars: CXX_STD = CXX11 PKG_CPPFLAGS = -I./rocksdb/include/ PKG_LIBS = rocksdb/librocksdb.a -lbz2 -lz -lzstd -llz4 -lsnappy $(SHLIB):…
Artem Klevtsov
  • 9,193
  • 6
  • 52
  • 57
2
votes
1 answer

How to organize key-values to implement Redis ZSet commands with RocksDB?

I implement Redis ZSet commands on top of RocksDB, and iteration is really slow with big keys; how can I optimize these key-values? I divide one zset key-element-score into 3 RocksDB key-values, such as name2age:linda:25 meta key: "name2age": 1 // count one…
wuYin
  • 21
  • 3
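A common layout for ZSet-on-KV is two key spaces: one for member-to-score point lookups (ZSCORE) and one with a fixed-width score embedded in the key, so that byte-order iteration equals score order (ZRANGEBYSCORE) without scanning unrelated keys. A stdlib-only sketch under those assumptions, using a TreeMap as a stand-in for RocksDB's sorted key space; the prefixes and class name are illustrative, and the padding shown only handles non-negative scores:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class ZSetKeys {
    // Zero-pad the score so lexicographic order matches numeric order
    // (non-negative scores only; negatives need an offset or sign flip).
    public static String encodeScore(long score) {
        return String.format("%020d", score);
    }

    public static String memberKey(String set, String member) {
        return "z:" + set + ":" + member;             // member -> score (ZSCORE)
    }

    public static String scoreKey(String set, long score, String member) {
        return "s:" + set + ":" + encodeScore(score) + ":" + member; // range scans
    }

    public static void main(String[] args) {
        NavigableMap<String, String> db = new TreeMap<>(); // stand-in for RocksDB
        db.put(memberKey("name2age", "linda"), "25");
        db.put(scoreKey("name2age", 25, "linda"), "");
        db.put(memberKey("name2age", "bob"), "30");
        db.put(scoreKey("name2age", 30, "bob"), "");

        // An iterator seeked to the "s:name2age:" prefix returns members in
        // ascending score order, touching no other keys.
        String prefix = "s:name2age:";
        for (String k : db.subMap(prefix, prefix + "\uffff").keySet()) {
            System.out.println(k);  // linda (25) before bob (30)
        }
    }
}
```

With RocksDB itself, the prefix scan becomes an iterator seek() to the prefix, and prefix bloom filters or a bounded upper key keep the scan from drifting past the set.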
2
votes
1 answer

RocksDB compaction not triggered or not happening

I have two Kafka Streams state stores implemented. They are both persistent key-value stores. The problem I am facing is that RocksDB compaction is happening in only one of the state stores, and the other state store just keeps piling on more sst…
2
votes
2 answers

How to get a sorted KeyValueStore from a KTable?

I want to materialize a KTable from a KStream, and I want the KeyValueStore to be sorted by the key. I tried looking up the KTable API spec (https://kafka.apache.org/20/javadoc/org/apache/kafka/streams/kstream/KTable.html), but no 'sort' method exists.…
Sanjay Das
  • 180
  • 3
  • 14
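No 'sort' method is needed because a RocksDB-backed store already keeps keys in serialized-byte order, which for byte-order-preserving serdes (e.g. UTF-8 strings) is the natural key order. A sketch of reading the materialized store in key order via interactive queries, assuming a running KafkaStreams instance `streams` and a store named "example-table" (both illustrative):

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// range() and all() iterate in serialized-key order; no explicit sort step.
ReadOnlyKeyValueStore<String, Long> store = streams.store(
        StoreQueryParameters.fromNameAndType(
                "example-table", QueryableStoreTypes.keyValueStore()));

try (KeyValueIterator<String, Long> iter = store.range("a", "z")) {
    iter.forEachRemaining(kv -> System.out.println(kv.key + " = " + kv.value));
}
```

The caveat is that "sorted" means sorted by the serde's byte output, so numeric keys need a big-endian/fixed-width encoding to iterate in numeric order.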
2
votes
1 answer

Kafka Streams: Dynamically Configure RocksDb

I want to tune the performance of Kafka Streams, and for that I have to play with RocksDB configuration values. I see I can use StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG to set the configuration for RocksDB, as shown here. But I would like…
user1028741
  • 2,745
  • 6
  • 34
  • 68
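The config setter class does receive per-invocation context, which is the usual hook for dynamic tuning: `storeName` allows different settings per store, and `configs` exposes the application's own config map. A sketch assuming Kafka 2.3+ (where the interface also has a close() hook); the name-prefix convention and buffer sizes are illustrative:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        // storeName allows per-store tuning; configs carries the app's
        // StreamsConfig values, so decisions can be made at runtime.
        if (storeName.startsWith("large-")) {   // illustrative convention
            options.setWriteBufferSize(32 * 1024 * 1024L);
            options.setMaxWriteBufferNumber(4);
        }
    }

    @Override
    public void close(String storeName, Options options) {
        // Release any RocksObjects created in setConfig (none here).
    }
}
// Registered via:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
```

Anything that cannot be derived from storeName or configs (e.g. values computed elsewhere at runtime) has to be passed through the config map, since Streams instantiates the setter class itself.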
2
votes
2 answers

Why are Get and MultiGet significantly slower for large key sets compared to using an Iterator?

I'm currently playing around with RocksDB (C++) and was curious about some performance metrics I've experienced. For testing purposes, my database keys are file paths and the values are filenames. My database has around 2M entries in it. I'm…
kennyc
  • 5,490
  • 5
  • 34
  • 57
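The gap is expected: each Get/MultiGet is an independent point lookup (memtable probe plus per-key index and bloom-filter checks across levels), while an iterator streams sorted data blocks sequentially, amortizing block reads over the whole scan; reading most of a 2M-entry keyspace is therefore usually far cheaper with an iterator than with 2M point reads. The question is C++, but an equivalent rocksdbjni sketch of the two access patterns (the path and keys are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

public class ScanVsMultiGet {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, "/tmp/paths-db")) {

            // Point lookups: one index/bloom probe per key.
            List<byte[]> keys = Arrays.asList("/a/1".getBytes(), "/b/2".getBytes());
            List<byte[]> values = db.multiGetAsList(keys);

            // Sequential scan: amortizes block reads over the keyspace.
            try (RocksIterator it = db.newIterator()) {
                for (it.seekToFirst(); it.isValid(); it.next()) {
                    // it.key() / it.value()
                }
            }
        }
    }
}
```

The rule of thumb that follows: prefer Get/MultiGet for a few scattered keys, and an iterator (optionally with seek() bounds) when touching a large fraction of the data.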
2
votes
0 answers

kafka streams - number of open file descriptors keeps going up

Our Kafka Streams app keeps opening new file descriptors as long as there are new incoming messages, without ever closing old ones. It eventually leads to an exception. We've raised the limit of open fds to 65k, but it doesn't seem to help. Both Kafka…
sumek
  • 26,495
  • 13
  • 56
  • 75
2
votes
1 answer

Kafka KStream-KStream join with sliding window: memory usage grows over time until OOM

I'm having a problem using KStream joins. What I do is: from one topic, I separate 3 different types of messages into new streams. Then I do one inner join with two of the streams, which creates another stream; finally, I do a last left join with the new…
kambo
  • 129
  • 2
  • 11
2
votes
2 answers

Is RocksDB a good choice for storing homogeneous objects?

I'm looking for an embeddable data storage engine in C++. RocksDB is a key-value store. My data is very homogeneous. I have a modest number of types (on the order of 20), and I store many instances (on the order of 1 million) of those types. I…
Boinst
  • 3,365
  • 2
  • 38
  • 60
2
votes
1 answer

How to set TTL on RocksDB properly?

I am trying to use RocksDB with TTL. The way I initialise RocksDB is as below: options.setCreateIfMissing(true) .setWriteBufferSize(8 * SizeUnit.KB) .setMaxWriteBufferNumber(3) .setCompressionType(CompressionType.LZ4_COMPRESSION) …
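TTL is not a plain Options flag: the Java binding exposes it through the dedicated TtlDB wrapper, and expired entries are only removed during compaction, so reads can still return stale values until a compaction runs. A sketch assuming rocksdbjni; the path and TTL value are illustrative:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TtlDB;

public class TtlExample {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             // 3600s TTL; 'false' opens the DB read-write. Entries older
             // than the TTL are dropped at compaction time, not on expiry.
             TtlDB db = TtlDB.open(options, "/tmp/ttl-db", 3600, false)) {
            db.put("key".getBytes(), "value".getBytes());
        }
    }
}
```

The usual tuning options (write buffer size, compression, etc.) from the excerpt above can still be set on the same Options object before the TtlDB.open() call.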
2
votes
1 answer

RocksDB in Kafka stream reporting no space when there is space available

I have a Streams application with a GlobalKtable backed by RocksDB that’s failing. I was originally getting the error described in https://issues.apache.org/jira/browse/KAFKA-6327, so I upgraded RocksDB to v5.14.2, which now gives a more explicit…
Ickster
  • 2,167
  • 4
  • 22
  • 43