
In our application, the Flink checkpoint size keeps increasing and never comes down, with RocksDB as the state backend (AWS KDA).

Kafka --> do some magic --> ES (sink) --> writes to Kafka

The keys we use are UUIDs and are never repeated. How can I configure Flink so that the checkpoint size does not keep growing, or fine-tune RocksDB to delete any keys older than one day?

Fryder

1 Answer


With the DataStream API you can configure state TTL to automatically delete keys after some time interval, or you can manage state expiry manually by using timers in a KeyedProcessFunction.
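A minimal sketch of the state TTL approach, assuming keyed state inside a RichFlatMapFunction (the one-day TTL, the state name, and the compaction-filter setting are illustrative choices, not something from the question):

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class TtlExampleFunction extends RichFlatMapFunction<String, String> {

    private transient ValueState<String> lastSeen;

    @Override
    public void open(Configuration parameters) {
        // Expire entries one day after they are created or last written.
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.days(1))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                // Let the RocksDB compaction filter drop expired entries in the background.
                .cleanupInRocksdbCompactFilter(1000)
                .build();

        ValueStateDescriptor<String> descriptor =
                new ValueStateDescriptor<>("last-seen", String.class);
        descriptor.enableTimeToLive(ttlConfig);
        lastSeen = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        lastSeen.update(value);
        out.collect(value);
    }
}
```

Since the expired entries are dropped during RocksDB compaction, the checkpoint size should stop growing once old keys start aging out.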

If you are using the SQL/Table API, then you should configure an idle state retention time.
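For the Table API, a sketch of setting idle state retention, assuming a recent Flink version where TableConfig#setIdleStateRetention takes a java.time.Duration:

```java
import java.time.Duration;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IdleStateRetentionExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // State for keys that have been idle for one day becomes eligible for cleanup.
        tEnv.getConfig().setIdleStateRetention(Duration.ofDays(1));
    }
}
```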

David Anderson
  • I am using Apache Beam as the coding SDK. Need help with how to do this in Beam. – Fryder Mar 03 '21 at 13:43
  • I believe Beam has timers you can use for this purpose. See https://stackoverflow.com/questions/66349543/apache-beam-ttl-in-state-spec, for example. – David Anderson Mar 03 '21 at 13:57
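Following up on the comments, a rough sketch of timer-based cleanup with Beam's Java SDK, assuming a stateful DoFn keyed by the UUID (state name, value type, and the one-day expiry are illustrative):

```java
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.TimeDomain;
import org.apache.beam.sdk.state.Timer;
import org.apache.beam.sdk.state.TimerSpec;
import org.apache.beam.sdk.state.TimerSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;
import org.joda.time.Instant;

// Stateful DoFn keyed by the UUID; per-key state is cleared by an event-time timer one day later.
public class ExpireOldKeysFn extends DoFn<KV<String, String>, KV<String, String>> {

    @StateId("payload")
    private final StateSpec<ValueState<String>> payloadSpec = StateSpecs.value();

    @TimerId("expiry")
    private final TimerSpec expirySpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

    @ProcessElement
    public void processElement(
            @Element KV<String, String> element,
            @Timestamp Instant timestamp,
            @StateId("payload") ValueState<String> payload,
            @TimerId("expiry") Timer expiry,
            OutputReceiver<KV<String, String>> out) {
        payload.write(element.getValue());
        // Schedule cleanup one day after this element's event timestamp.
        expiry.set(timestamp.plus(Duration.standardDays(1)));
        out.output(element);
    }

    @OnTimer("expiry")
    public void onExpiry(@StateId("payload") ValueState<String> payload) {
        // Drop the per-key state so it no longer contributes to checkpoint size.
        payload.clear();
    }
}
```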