Questions tagged [rocksdb]

RocksDB is an embeddable persistent key-value store for fast storage. RocksDB can also be the foundation for a client-server database but our current focus is on embedded workloads.

About


RocksDB builds on LevelDB to be scalable to run on servers with many CPU cores, to efficiently use fast storage, to support IO-bound, in-memory and write-once workloads, and to be flexible to allow for innovation.

474 questions
3
votes
2 answers

Is it possible to concurrently read from RocksDB?

I have a case where multiple Linux processes need to link with the RocksDB library and concurrently read (under high load) the same database. Only one process updates the database, several times a day. Is it possible to concurrently read from within multiple…
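For context on this pattern: a single RocksDB handle is safe for concurrent reads from many threads within one process, and across processes readers can typically use `DB::OpenForReadOnly` or `DB::OpenAsSecondary` while one writer process holds the lock. A minimal sketch of the single-handle, many-readers idea, using a hypothetical `SharedStore` dict stand-in rather than RocksDB itself:

```python
import threading

class SharedStore:
    """Stand-in for a single RocksDB handle shared by reader threads."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()   # only the writer path needs it here

    def put(self, k, v):
        with self._lock:
            self._data[k] = v

    def get(self, k):
        return self._data.get(k)        # concurrent reads need no coordination

store = SharedStore()
for i in range(100):
    store.put(i, i * i)

results = []
def reader(start):
    # Each reader sums 50 values; list.append is atomic under the GIL.
    results.append(sum(store.get(k) for k in range(start, start + 50)))

threads = [threading.Thread(target=reader, args=(i,)) for i in (0, 50)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results))  # [40425, 287925]
```

The cross-process case is different: RocksDB takes a file lock on the DB directory, so only one read-write opener is allowed and other processes must open read-only or as a secondary instance.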
3
votes
0 answers

ROCKSDB Failed to acquire lock due to rocksdb_max_row_locks limit

Version: 10.4.8-MariaDB, engine: ROCKSDB. I have a table labor with 40 million rows and a table map with 200,000 rows, and I wanted to update some columns of labor with it. Since I ran into performance problems as the table grew, I decided to migrate from…
giordano
  • 2,954
  • 7
  • 35
  • 57
3
votes
0 answers

Kafka Stream rocksdb metrics not available

I am trying to monitor a KStream rocksdb store. As per the Confluent documentation, RocksDB property-based metrics of type stream-state-metrics should be available at a metric recording level of INFO. I could observe the MBeans for the same from…
Sarvesh
  • 519
  • 1
  • 8
  • 16
3
votes
1 answer

Can RocksDB settings be changed with the java library while the database is open?

Using the Java library, can any configuration changes take effect without requiring a reopen of the database? For example, level0SlowdownWritesTrigger. More context: I'm trying to flip between a bulk-load mode and regular mode, e.g. disable…
Dan Tanner
  • 2,229
  • 2
  • 26
  • 39
3
votes
1 answer

Java application runs much slower when packaged as a macOS app

I've written a Java application that I want to package for the main OSes so that I can provide it as a self-contained installable image. To do this, I use jpackage, with help from the best-named plugin I've come across, The Badass Runtime Plugin…
AndyW
  • 101
  • 7
3
votes
1 answer

Delete all keys from rocksdb (drop all)

I have a rocksdb instance with multithreaded read/write access. At some point an arbitrary thread needs to process a request to clear the whole database, basically deleting all keys. How can I do it with the least disturbance to the other threads?…
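One common approach to this (an assumption about intent, not the asker's code) is a single `DeleteRange` tombstone over the whole key space, or dropping and re-creating the column family. A toy dict-backed sketch of the range-delete semantics, with `MiniStore` as a hypothetical stand-in for a column family:

```python
class MiniStore:
    """Toy ordered KV store standing in for a RocksDB column family."""
    def __init__(self):
        self.data = {}

    def put(self, key: bytes, value: bytes):
        self.data[key] = value

    def delete_range(self, begin: bytes, end: bytes):
        # Drop every key in [begin, end), mirroring DeleteRange semantics.
        self.data = {k: v for k, v in self.data.items()
                     if not (begin <= k < end)}

store = MiniStore()
store.put(b"a", b"1")
store.put(b"b", b"2")
store.put(b"z", b"3")
# Clear the whole key space with one tombstone-like operation.
store.delete_range(b"", b"\xff" * 8)
print(len(store.data))  # 0
```

In real RocksDB the attraction of `DeleteRange` is that it writes one tombstone instead of iterating and deleting key by key, so concurrent readers and writers see minimal disruption.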
3
votes
1 answer

ROCKSDB Failed to acquire lock due to rocksdb_max_row_locks

I'm trying to load a CSV into my rocksdb database, but it fails and shows me this error: Got error 10 'Operation aborted:Failed to acquire lock due to rocksdb_max_row_locks limit' from ROCKSDB I've tried with SET SESSION…
rogarui
  • 97
  • 1
  • 9
3
votes
1 answer

Is it safe to use embedded database (RocksDB, BoltDB, BadgerDB) on DigitalOcean block storage?

DigitalOcean block storage uses Ceph, which means that a volume attached to the droplet is physically located on a different machine. So a database file written to this volume would use the network, not the local disk. BoltDB specifically…
yname
  • 2,189
  • 13
  • 23
3
votes
0 answers

Kafka Streams with High Cardinality

I currently have a Kafka Stream service: { val _ = metrics val timeWindow = Duration.of(config.timeWindow.toMillis, ChronoUnit.MILLIS) val gracePeriod = Duration.of(config.gracePeriod.toMillis, ChronoUnit.MILLIS) val store =…
Bigicecream
  • 159
  • 1
  • 6
3
votes
1 answer

RocksDB - Double db size after 2 Put operations of same KEY-VALUEs

I have a program using RocksDB that tries to write a huge number of KEY-VALUE pairs to the database: int main() { DB* db; Options options; // Optimize RocksDB. This is the easiest way to get RocksDB to perform…
HuyLuyen
  • 43
  • 3
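The likely explanation for the doubled size (an assumption from the question title, not a confirmed diagnosis): RocksDB is log-structured, so a second Put of the same keys appends new versions, and the old versions keep occupying space until compaction drops them. A toy sketch of that append-then-compact behavior, with `ToyLsm` as a hypothetical stand-in:

```python
class ToyLsm:
    """Append-only log standing in for SST files; compaction keeps newest versions."""
    def __init__(self):
        self.log = []           # (key, value) entries, newest last

    def put(self, key, value):
        self.log.append((key, value))

    def compact(self):
        latest = {}
        for k, v in self.log:
            latest[k] = v       # later entries win
        self.log = list(latest.items())

db = ToyLsm()
for _ in range(2):              # write the same KEY-VALUE pairs twice
    for i in range(1000):
        db.put(i, b"x")

print(len(db.log))  # 2000 entries -> roughly double the on-disk footprint
db.compact()
print(len(db.log))  # 1000 after compaction reclaims the old versions
```

In real RocksDB the same effect shows up as SST files whose obsolete entries are only reclaimed once compaction rewrites the affected levels.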
3
votes
1 answer

Does leveldb generate Bloom Filter every 4KB or 2KB of Data Block?

I have read the source code of leveldb. I found that when the size of a Data Block reaches 4KB, it flushes the Data Block and calls FilterBlockBuilder::StartBlock() to generate the filter. void TableBuilder::Add(const Slice& key, const Slice& value) { …
Zihe Liu
  • 159
  • 1
  • 12
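For reference on the 2KB-vs-4KB question: in leveldb's `filter_block.cc`, `kFilterBaseLg` is 11, so `kFilterBase` is 2048 bytes and `StartBlock(block_offset)` generates one filter per 2KB of data-block offset, even though data blocks themselves flush at roughly 4KB. A small sketch of that index arithmetic:

```python
K_FILTER_BASE_LG = 11                    # from leveldb filter_block.cc
K_FILTER_BASE = 1 << K_FILTER_BASE_LG    # 2048 bytes = 2KB

def filter_index(block_offset: int) -> int:
    # StartBlock(offset) generates filters up to offset / kFilterBase.
    return block_offset // K_FILTER_BASE

# Data blocks flushed every ~4KB advance the filter index by 2,
# so some filter slots cover no keys at all:
print(filter_index(0))      # 0
print(filter_index(4096))   # 2
print(filter_index(8192))   # 4
```

So the filter granularity is 2KB of offset, not 4KB of data; with 4KB blocks every other filter slot ends up empty.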
3
votes
1 answer

Use RocksDB to support key-key-value (RowKey->Containers) by splitting the container

Suppose I have a key/value pair where the value is a logical list of strings to which I can append. To avoid having the insertion of a single string item cause a rewrite of the entire list, I'd use multiple key-value pairs to represent…
Kenneth
  • 561
  • 1
  • 5
  • 13
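A common encoding for this split-container pattern (a sketch under assumed requirements, not the asker's design) is composite keys of the form `rowkey + separator + sequence`, so appending one item writes one small key-value pair and the whole container is read back with a prefix scan. A dict-backed sketch with hypothetical names:

```python
SEP = b"\x00"

def item_key(row_key: bytes, seq: int) -> bytes:
    # Fixed-width big-endian sequence keeps lexicographic order == insertion order.
    return row_key + SEP + seq.to_bytes(8, "big")

class ContainerStore:
    """Stand-in for RocksDB: one composite key per container item."""
    def __init__(self):
        self.data = {}
        self.next_seq = {}

    def append(self, row_key: bytes, value: bytes):
        seq = self.next_seq.get(row_key, 0)
        self.data[item_key(row_key, seq)] = value   # one small write per append
        self.next_seq[row_key] = seq + 1

    def read_all(self, row_key: bytes):
        prefix = row_key + SEP
        return [v for k, v in sorted(self.data.items()) if k.startswith(prefix)]

s = ContainerStore()
s.append(b"row1", b"a")
s.append(b"row1", b"b")
print(s.read_all(b"row1"))  # [b'a', b'b']
```

With real RocksDB the `sorted(...)` scan maps naturally onto a prefix iterator, since keys are already stored in lexicographic order.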
3
votes
1 answer

How does Flink make checkpoint asynchronously with RocksDB backend

I am using Flink with RocksDB. From the Flink documentation I understand that Flink makes checkpoints asynchronously when using the RocksDB backend. See the description in its docs: It is possible to let an operator continue processing while it…
Jerry Zhang
  • 192
  • 3
  • 10
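The rough mechanism behind the asynchronous checkpoint (stated broadly, not as Flink's exact internals): a consistent, immutable view of the state is taken cheaply, via RocksDB snapshots or hard-linked SST files, and uploaded by a background thread while the operator keeps mutating live state. A toy snapshot-then-continue sketch with hypothetical names:

```python
import threading

state = {"count": 10}    # stand-in for the operator's RocksDB state
uploaded = {}            # stand-in for the checkpoint storage

def checkpoint(snapshot):
    # Runs in the background: serialize/upload the immutable snapshot.
    uploaded.update(snapshot)

snapshot = dict(state)   # cheap consistent view taken synchronously
t = threading.Thread(target=checkpoint, args=(snapshot,))
t.start()

state["count"] += 5      # operator keeps processing meanwhile
t.join()

print(uploaded["count"], state["count"])  # 10 15
```

The key property is that the checkpoint reflects the state at snapshot time (10), unaffected by writes that happen during the upload.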
3
votes
0 answers

RocksDB compaction: how to reduce data size and use more than 1 CPU core?

I'm trying to use RocksDB to store billions of records, so the resulting databases are fairly large - hundreds of gigabytes, several terabytes in some cases. The data is initially imported from a different service snapshot and updated from Kafka…
3
votes
1 answer

Can I use the Flink RocksDB state backend with a local file system?

I am exploring using the Flink RocksDB state backend; the documentation seems to imply I can use a regular file system such as file:///data/flink/checkpoints, but the code javadoc only mentions the hdfs or s3 option here. I am wondering if it's possible to…
fast tooth
  • 2,317
  • 4
  • 25
  • 34