I have been experimenting with Flume NG (flume-ng-1.2.0+24.81-1~lucid), comparing the performance of the memory channel and the file channel.
Each event in my test system is 1 KB in size, and with my current configuration I can handle around 30,000 events per second (EPS) using the memory channel. With the file channel, however, I can only handle around 1,600 EPS.
I expect to receive an average of 2,500 EPS on my production system, and I would like to use the file channel to provide approximately one hour's worth of event buffering in case the sink fails (I am using an HDFS sink with a 1 Gbps connection to the Hadoop cluster).
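For context, that requirement is where the channel sizing below comes from (assuming ~1 KB per event as above):

2500 events/s x 3600 s = 9,000,000 events, i.e. roughly 9 GB of buffered data

which is approximately what capacity and maxFileSize are set to in the configuration.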
This is my file channel configuration:
agent.channels.c1.checkpointDir = ~/.flume/file-channel/checkpoint
agent.channels.c1.dataDirs = ~/.flume/file-channel/data
agent.channels.c1.transactionCapacity = 13107200
agent.channels.c1.checkpointInterval = 30000
agent.channels.c1.maxFileSize = 9216000000
agent.channels.c1.minimumRequiredSpace = 524288000
agent.channels.c1.capacity = 9000000
agent.channels.c1.keep-alive = 3
agent.channels.c1.write-timeout = 3
agent.channels.c1.checkpoint-timeout = 600
agent.channels.c1.use-log-replay-v1 = false
agent.channels.c1.use-fast-replay = false
The batch size for my HDFS sink has been set to 5000.
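For reference, the sink side looks roughly like this (the sink name k1 is just a placeholder; only the batch size is relevant here):

agent.sinks.k1.type = hdfs
agent.sinks.k1.channel = c1
agent.sinks.k1.hdfs.batchSize = 5000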
Does anyone have recommendations for improving the performance of my file channel?
Thanks