Questions tagged [rocksdb]

RocksDB is an embeddable persistent key-value store for fast storage. RocksDB can also be the foundation for a client-server database but our current focus is on embedded workloads.

About


RocksDB builds on LevelDB to be scalable to run on servers with many CPU cores, to efficiently use fast storage, to support IO-bound, in-memory and write-once workloads, and to be flexible to allow for innovation.


474 questions
2
votes
0 answers

How can the suppress operator result in an OOM if configured with an unbounded buffer?

.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())). I currently use the above construct to configure a suppress operator in my KStreams topology. The documentation for the unbounded strict-buffer config mentions the…
Abhijith Madhav
  • 2,748
  • 5
  • 33
  • 44
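
For the suppress question above: untilWindowCloses requires a strict buffer, and unbounded() tells Kafka Streams to hold the latest record of every not-yet-closed window in heap with no limit, so many keys, a long grace period, or stalled event time can exhaust memory. A minimal sketch of where that buffer sits in a topology (topic names, serdes, and the window size are assumptions):

    import java.time.Duration;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class SuppressSketch {
        public static Topology build() {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
                   .groupByKey()
                   .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
                   .count()
                   // Final-result semantics: nothing is emitted until the window closes,
                   // and the unbounded strict buffer keeps one record per open window in heap.
                   .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
                   .toStream()
                   .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count))
                   .to("event-counts", Produced.with(Serdes.String(), Serdes.Long()));
            return builder.build();
        }
    }

If heap usage is the concern, a bounded strict buffer such as Suppressed.BufferConfig.maxBytes(...).shutDownWhenFull() keeps the same final-result semantics but shuts the application down instead of growing without limit.
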
2
votes
1 answer

Why does RocksDB need multiple levels?

All of the keys in RocksDB's level 1 are already sorted, so we can find a key quickly in this level. Why does RocksDB still need to compact the files in level 1 down to level 2? I found an explanation in LevelDB's docs: Open file in one directory is…
YjyJeff
  • 833
  • 1
  • 6
  • 14
2
votes
1 answer

Key Value Database Modeling for searchability

Let's say I am building a marketplace like eBay (or something) for example, with data that looks like this (pseudo-code): public class Item { Double price; String geoHash; Long startAvailabilty; // timestamp Long endAvailabilty; //…
quarks
  • 33,478
  • 73
  • 290
  • 513
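
For the modeling question above, the usual approach in a plain key-value store is to build secondary "index" keys yourself, encoding the attributes you want to search by into a byte layout whose sort order matches the query, so a prefix/range scan answers it. A hypothetical sketch of such an encoder (the field names and layout are assumptions, not a RocksDB API):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class ItemIndexKey {
        // Assumed layout: geoHash | 0x00 | big-endian price in cents | itemId.
        // Keys sharing a geoHash sort together and, within it, by ascending price,
        // so "items near X cheaper than Y" becomes a bounded prefix scan.
        public static byte[] encode(String geoHash, long priceCents, long itemId) {
            byte[] geo = geoHash.getBytes(StandardCharsets.UTF_8);
            ByteBuffer buf = ByteBuffer.allocate(geo.length + 1 + Long.BYTES + Long.BYTES);
            buf.put(geo)
               .put((byte) 0)          // separator so a shorter geoHash never collides with a longer one
               .putLong(priceCents)    // big-endian preserves numeric order under byte comparison
               .putLong(itemId);       // tie-breaker keeps index keys unique
            return buf.array();
        }
    }
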
2
votes
1 answer

RocksDB is not freeing up space after a delete

Most of our services are using a Kafka store, which as you know is using RocksDB under the hood. We are trying to delete outdated and wrongly formatted records every 6 hours, in order to free up space. Even though the record gets deleted from…
iliev951
  • 33
  • 4
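
For the space question above: a delete in RocksDB only writes a tombstone, so disk usage does not drop until compaction rewrites the SST files holding the old values (and a Kafka changelog topic additionally has its own retention). A minimal RocksJava sketch of that behaviour (the database path is assumed):

    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class DeleteReclaimSketch {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/delete-reclaim-demo")) {
                db.put("stale-record".getBytes(), "big payload".getBytes());
                // delete() only appends a tombstone; the old value still occupies space
                db.delete("stale-record".getBytes());
                // space is reclaimed when compaction rewrites the affected files;
                // compactRange() forces a full manual compaction
                db.compactRange();
            }
        }
    }
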
2
votes
1 answer

How to delete a row for EmbeddedRocksDB table engine in ClickHouse?

We use a regular INSERT for inserting into EmbeddedRocksDB tables; inserting a new value for a key updates the value. There is no DELETE FROM rocksTable WHERE xxx in ClickHouse. Inserting NULL also doesn't work, which just sets default values for the…
ramazan polat
  • 7,111
  • 1
  • 48
  • 76
2
votes
1 answer

Having consumer issues when using RocksDB in Flink

I have a job which consumes from RabbitMQ. I was using the FS state backend, but the state sizes grew, so I decided to move my state to RocksDB. The issue is that during the first hours of running the job is fine, even after…
Alter
  • 903
  • 1
  • 11
  • 27
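
For the Flink question above, the switch from the FS backend usually looks like the sketch below (Flink 1.13+ class names; the checkpoint interval is an assumption). With RocksDB, working state lives on local disk and every state access pays serialization, which is the usual place to look when throughput degrades after a few hours.

    import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RocksDbBackendSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Keep working state in RocksDB on local disk instead of the JVM heap;
            // "true" enables incremental checkpoints so only changed SST files are uploaded.
            env.setStateBackend(new EmbeddedRocksDBStateBackend(true));
            env.enableCheckpointing(60_000); // checkpoint every 60 seconds (assumed interval)
        }
    }
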
2
votes
1 answer

How to use RocksDB tailing iterator?

I am using RocksDB Java JNI and would like to get new entries as they are added to the RocksDB. Thread t = new Thread(() -> { for (int i = 0; i < 1000; i++) { try { System.out.println("Putting " + i); …
JavaTechnical
  • 8,846
  • 8
  • 61
  • 97
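
For the tailing-iterator question above: a tailing iterator is a regular iterator opened with ReadOptions.setTailing(true). It does not push notifications or block for new data, but re-seeking it picks up entries written after it was created. A minimal RocksJava sketch (the path is assumed):

    import org.rocksdb.Options;
    import org.rocksdb.ReadOptions;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;
    import org.rocksdb.RocksIterator;

    public class TailingIteratorSketch {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/tailing-demo");
                 ReadOptions readOptions = new ReadOptions().setTailing(true);
                 RocksIterator it = db.newIterator(readOptions)) {

                db.put("k1".getBytes(), "v1".getBytes());
                db.put("k2".getBytes(), "v2".getBytes()); // written after the iterator was created

                // A tailing iterator sees fresh writes on (re-)seek instead of a frozen
                // snapshot, but it must be polled; it never blocks waiting for new entries.
                for (it.seekToFirst(); it.isValid(); it.next()) {
                    System.out.println(new String(it.key()) + " = " + new String(it.value()));
                }
            }
        }
    }
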
2
votes
2 answers

Can we update a state's TTL value?

We have a topology that uses state (ValueState and ListState) with TTL (StateTtlConfig) because we cannot use timers (we would generate hundreds of millions of timers per day, and that does not scale: a savepoint/checkpoint would take hours to be…
Rocel
  • 1,029
  • 1
  • 7
  • 22
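
For the TTL question above: the TTL duration is part of the StateTtlConfig attached to the state descriptor when the operator starts, so changing it in practice means restarting the job with a new descriptor configuration. A minimal sketch of where that value lives (the state name and the 24 h value are assumptions):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    public class TtlDescriptorSketch {
        public static ValueStateDescriptor<String> build() {
            StateTtlConfig ttlConfig = StateTtlConfig
                    .newBuilder(Time.hours(24))  // the TTL value is fixed here, per descriptor
                    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                    .build();
            ValueStateDescriptor<String> descriptor =
                    new ValueStateDescriptor<>("session-data", String.class);
            descriptor.enableTimeToLive(ttlConfig);
            return descriptor;
        }
    }
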
2
votes
0 answers

RocksDB: Unhandled exception thrown: read access violation

I have a simple RocksDB program where I create a DB and put some values in it using transactions. using namespace rocksdb; std::string kDBPath = "D:\\Newdata"; int main() { // open DB Options options; TransactionDBOptions…
Kushal Warke
  • 21
  • 1
  • 5
2
votes
1 answer

Multiple rocksdb Instances: Use a Single Shared Cache or Multiple Independent Caches?

We are opening multiple RocksDB instances in a single process, and they are all accessed equally. When using BlockBasedTableOptions::block_cache, is there any benefit to allocating a single large cache over several smaller caches? With NewLRUCache it…
Platypi
  • 29
  • 2
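
For the cache question above (the question uses the C++ API; this is a roughly equivalent RocksJava sketch, with sizes and paths assumed): passing the same cache object to every instance gives one global memory budget that the hottest instances can use more of, whereas per-instance caches give isolation at the cost of fragmenting that budget.

    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.LRUCache;
    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class SharedBlockCacheSketch {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            // One LRUCache instance shared by both databases: a single 512 MB budget.
            try (LRUCache sharedCache = new LRUCache(512L * 1024 * 1024);
                 Options options1 = new Options().setCreateIfMissing(true)
                         .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(sharedCache));
                 Options options2 = new Options().setCreateIfMissing(true)
                         .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(sharedCache));
                 RocksDB db1 = RocksDB.open(options1, "/tmp/shared-cache-db1");
                 RocksDB db2 = RocksDB.open(options2, "/tmp/shared-cache-db2")) {
                // Both instances now compete for, and are bounded by, the same block cache.
            }
        }
    }
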
2
votes
1 answer

Whenever I put a value in RocksDB for the same key, the value gets updated but the key count also increases

Whenever I put a value in RocksDB for the same key, the value gets updated, but the count from the following method, db.getLongProperty(columnFamily, "rocksdb.estimate-num-keys"), still gets incremented. Why am I getting this weird behavior?
Venkatesh R
  • 21
  • 1
  • 3
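
For the counting question above: "rocksdb.estimate-num-keys" is exactly that, an estimate. Overwrites of the same key exist as separate entries in memtables and SST files until compaction merges them, and each copy can be counted. A minimal RocksJava sketch illustrating the effect (the path is assumed):

    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class EstimateNumKeysSketch {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/estimate-demo")) {
                for (int i = 0; i < 5; i++) {
                    // the same key overwritten five times
                    db.put("same-key".getBytes(), ("v" + i).getBytes());
                }
                // may report more than 1 until compaction deduplicates the overwrites
                System.out.println(db.getLongProperty("rocksdb.estimate-num-keys"));
                db.compactRange();
                System.out.println(db.getLongProperty("rocksdb.estimate-num-keys"));
            }
        }
    }
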
2
votes
1 answer

Flink RocksDB compaction filter not working

I have a Flink cluster. I enabled the compaction filter and am using state TTL, but the RocksDB compaction filter does not free state from memory. I have about 300 records/s in my Flink pipeline. My state TTL config: @Override public void…
Mohammad Hossein Gerami
  • 1,360
  • 1
  • 10
  • 26
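
For the compaction-filter question above: the compaction filter does not proactively free anything; expired entries are dropped only when the SST files that contain them are actually compacted, which at roughly 300 records/s can take a long time to happen. The cleanup strategy is enabled on the TTL config, roughly as in this sketch (the TTL value and query interval are assumptions):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    public class CompactionFilterTtlSketch {
        public static ValueStateDescriptor<Long> build() {
            StateTtlConfig ttlConfig = StateTtlConfig
                    .newBuilder(Time.minutes(30))
                    // Expired state is removed only when RocksDB compacts the files holding it;
                    // the current timestamp is re-fetched every 1000 entries the filter processes.
                    .cleanupInRocksdbCompactFilter(1000)
                    .build();
            ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>("event-count", Long.class);
            descriptor.enableTimeToLive(ttlConfig);
            return descriptor;
        }
    }
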
2
votes
1 answer

Kafka Streams - RocksDB - max open files

If we define max open files as 300 and the number of .sst files exceeds that, I assume files in the cache will be evicted; but if the data in an evicted file is accessed again, will it be reloaded, or is that file lost for…
Raman
  • 665
  • 1
  • 15
  • 38
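
For the max-open-files question above: the limit caps RocksDB's table cache of open .sst file handles, not the files themselves. When the cache is full, the least recently used handle is closed; the file stays on disk and is simply reopened (paying the open cost again) the next time its data is needed, so nothing is lost. In Kafka Streams the setting would go through a RocksDBConfigSetter, roughly as sketched here:

    import java.util.Map;

    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.Options;

    // A hypothetical config setter; it is registered via the
    // rocksdb.config.setter property of the Streams application.
    public class BoundedOpenFilesSetter implements RocksDBConfigSetter {
        @Override
        public void setConfig(String storeName, Options options, Map<String, Object> configs) {
            // caps the number of simultaneously open .sst handles per store
            options.setMaxOpenFiles(300);
        }

        @Override
        public void close(String storeName, Options options) {
            // no resources of our own to release
        }
    }
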
2
votes
1 answer

Flink - Failed to recover from a checkpoint

I'm running my cluster on Kubernetes with a single jobmanager and 2 taskmanagers. I tested the checkpointing mechanism by killing one of the taskmanager pods while the job was running. I got the following exceptions on the jobmanager and the…
Yair Cohen
  • 417
  • 4
  • 16
2
votes
2 answers

Apache Kafka StateStore

I am learning Apache Kafka (as a messaging system) and in that process came to know of the term StateStore (link here). I am also aware of Apache Kafka Streams, the client API. Is StateStore applicable to Apache Kafka in the context of messaging systems…
CuriousMind
  • 8,301
  • 22
  • 65
  • 134
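
For the StateStore question above: StateStore is a Kafka Streams (client-side) construct, not part of the broker or the plain producer/consumer messaging layer; a persistent store is backed by RocksDB on the application's own disk and made fault-tolerant through a changelog topic. A minimal sketch of declaring one (store name and serdes are assumptions):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class StateStoreSketch {
        public static void main(String[] args) {
            // A persistent (RocksDB-backed) key-value store declared through the Streams API;
            // plain producers and consumers never see it, it lives inside the Streams application.
            StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
                    Stores.keyValueStoreBuilder(
                            Stores.persistentKeyValueStore("item-counts"),
                            Serdes.String(),
                            Serdes.Long());

            StreamsBuilder builder = new StreamsBuilder();
            builder.addStateStore(storeBuilder);
        }
    }
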