I'm reading about outbox pattern implementations that create records in a table, and then a Debezium connector reads the binlog to publish those changes to Kafka. This raises an issue: after a record has been added (and written to the binlog), it just takes up storage, and the table can get really big. There are several approaches to cleaning up the old records, such as dropping partitions by date, creating and then immediately deleting each record, or DB triggers that delete the records.
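To make the setup concrete, here is a minimal sketch of the conventional write path I'm describing, assuming a hypothetical `outbox` table with columns `id, aggregate_id, event_type, payload`, plain JDBC, and a placeholder connection string:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class OutboxInsertExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app", "user", "pass")) {
            conn.setAutoCommit(false);

            // 1) The actual business write (placeholder).
            try (PreparedStatement business = conn.prepareStatement(
                    "UPDATE orders SET status = ? WHERE id = ?")) {
                business.setString(1, "PAID");
                business.setLong(2, 42L);
                business.executeUpdate();
            }

            // 2) The outbox insert in the same transaction; Debezium picks the
            //    resulting binlog entry up and publishes it to Kafka. Every event
            //    adds a new row, which is why the table grows and needs cleanup.
            try (PreparedStatement outbox = conn.prepareStatement(
                    "INSERT INTO outbox (id, aggregate_id, event_type, payload) VALUES (?, ?, ?, ?)")) {
                outbox.setString(1, UUID.randomUUID().toString());
                outbox.setString(2, "order-42");
                outbox.setString(3, "OrderPaid");
                outbox.setString(4, "{\"orderId\":42,\"status\":\"PAID\"}");
                outbox.executeUpdate();
            }

            conn.commit();
        }
    }
}
```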
My suggestion: I would create 1,000,000 records in this table in advance, and on every event just update one of those records at random. The Debezium functionality would stay the same, and I would avoid the need to delete old records.
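A minimal sketch of this variant, assuming the same hypothetical `outbox` table but keyed by a numeric slot id in the range `[0, SLOT_COUNT)` that was pre-filled up front:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.ThreadLocalRandom;

public class OutboxUpdateExample {
    // Number of rows created in advance (the 1M pre-allocated records).
    private static final long SLOT_COUNT = 1_000_000L;

    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app", "user", "pass")) {
            conn.setAutoCommit(false);

            // Business write omitted; it would sit in the same transaction as above.

            // Instead of inserting, overwrite one randomly chosen pre-created row.
            // Debezium still sees a row-level UPDATE event in the binlog, so the
            // connector keeps emitting change events while the table stays the same size.
            long slot = ThreadLocalRandom.current().nextLong(SLOT_COUNT);
            try (PreparedStatement outbox = conn.prepareStatement(
                    "UPDATE outbox SET aggregate_id = ?, event_type = ?, payload = ? WHERE id = ?")) {
                outbox.setString(1, "order-42");
                outbox.setString(2, "OrderPaid");
                outbox.setString(3, "{\"orderId\":42,\"status\":\"PAID\"}");
                outbox.setLong(4, slot);
                outbox.executeUpdate();
            }

            conn.commit();
        }
    }
}
```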
Apart from paying a constant storage cost for those 1M records, is there any other reason to avoid such an approach?