I am doing data replication in Alpakka using Consumer.committableSource, but the Kafka log file grows very quickly, reaching 5 GB in a day. To solve this, I want to delete processed data immediately. I am using the deleteRecords method of AdminClient to delete everything up to the committed offset, but when I look at the log file, the data corresponding to that offset is not deleted.
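For reference, the deletion call looks roughly like this (a minimal sketch; the broker address, topic name my-topic, partition 0, and the processedOffset value are placeholders for my actual setup):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

import java.util.Collections;
import java.util.Properties;

public class DeleteProcessedRecords {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition partition = new TopicPartition("my-topic", 0);
            long processedOffset = 42L; // placeholder: last fully replicated offset
            // Asks the broker to drop all records before this offset by
            // advancing the partition's log start offset
            admin.deleteRecords(Collections.singletonMap(
                    partition, RecordsToDelete.beforeOffset(processedOffset)))
                .all().get();
        }
    }
}
```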
The Kafka LogCleaner thread is synchronous and periodic. You typically shouldn't be pinpointing an offset to delete; rather, set retention limits in bytes or hours/days/milliseconds. – OneCricketeer Oct 22 '18 at 06:12
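For example, retention can be set per topic with the AdminClient (a sketch assuming a Kafka 2.x client where alterConfigs is still available; the topic name and the one-hour value are placeholders — note that alterConfigs replaces the topic's existing dynamic config set):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collections;
import java.util.Properties;

public class SetTopicRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // Keep at most one hour of data; the broker deletes older
            // log segments on its next retention pass.
            Config retention = new Config(Collections.singletonList(
                new ConfigEntry("retention.ms", "3600000")));
            admin.alterConfigs(Collections.singletonMap(topic, retention)).all().get();
        }
    }
}
```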
1 Answer
When using committableSource, you need to acknowledge that a record has been successfully read and is ready to be cleaned up by committing its offset. You can do that by calling commitJavadsl(). Take a look at the example in the documentation for more information.
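Roughly along these lines (a sketch assuming Alpakka Kafka 1.x with the Java DSL; the broker address, group id, topic name, and the processRecord step are placeholders):

```java
import akka.Done;
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Sink;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class CommittingConsumer {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("replication");
        Materializer materializer = ActorMaterializer.create(system);

        ConsumerSettings<String, String> settings =
            ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                .withBootstrapServers("localhost:9092")
                .withGroupId("replication-group");

        Consumer.committableSource(settings, Subscriptions.topics("my-topic"))
            // Replicate the record, then pass its offset downstream
            .mapAsync(1, msg ->
                processRecord(msg.record())
                    .thenApply(done -> msg.committableOffset()))
            // Commit only after processing has succeeded
            .mapAsync(1, offset -> offset.commitJavadsl())
            .runWith(Sink.ignore(), materializer);
    }

    // Placeholder for the actual replication logic
    private static CompletionStage<Done> processRecord(ConsumerRecord<String, String> record) {
        return CompletableFuture.completedFuture(Done.getInstance());
    }
}
```

Committing per message, as above, keeps the example simple; batching commits (or using a higher mapAsync parallelism) trades some at-least-once latency for throughput.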

– dvim