I am a beginner with Kafka.
We are looking to size our Kafka cluster (a 5-node cluster) for processing 17,000 events/sec, with each event about 600 bytes in size. We are planning a replication factor of 3 and retention of events for one week.
I read this on the Kafka documentation page:
assuming you want to be able to buffer for 30 seconds and
compute your memory need as write_throughput*30.
So what is this write throughput? If it is the number of bytes written per second, I am looking at 17,000 × 600 bytes ≈ 10.2 MB/sec. If I take that as my write throughput, then the memory need works out to about 306 MB (10.2 MB/sec × 30).
Does that figure represent the memory requirement for one node, or for the entire cluster (5 nodes)?
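To check my own arithmetic, here is my back-of-the-envelope calculation in Python. The formulas are just my reading of the docs (decimal MB/TB, 30-second buffer, disk multiplied by the replication factor), so please correct me if any assumption is wrong:

```python
# Back-of-the-envelope Kafka sizing (my assumptions, please correct me)
EVENTS_PER_SEC = 17_000
EVENT_SIZE_BYTES = 600
REPLICATION_FACTOR = 3
RETENTION_DAYS = 7
BUFFER_SECONDS = 30  # per the "buffer for 30 seconds" note in the docs

# Write throughput in bytes per second
write_throughput = EVENTS_PER_SEC * EVENT_SIZE_BYTES
print(f"Write throughput: {write_throughput / 1e6:.1f} MB/sec")   # ~10.2 MB/sec

# Memory needed to buffer 30 seconds of writes
memory_buffer = write_throughput * BUFFER_SECONDS
print(f"30s memory buffer: {memory_buffer / 1e6:.0f} MB")         # ~306 MB

# Disk for a week of retention, counting all replicas (uncompressed)
disk_total = write_throughput * REPLICATION_FACTOR * RETENTION_DAYS * 86_400
print(f"Disk for 7 days at RF=3: {disk_total / 1e12:.1f} TB")     # ~18.5 TB
```

If that disk figure is for the whole cluster, I assume each of the 5 nodes would need roughly 18.5 / 5 ≈ 3.7 TB plus headroom, but I am not sure that is the right way to divide it.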
I would really appreciate some insights on sizing the memory and disk.
Regards,
VB