Can someone help me with this issue? https://github.com/fluent/fluentd/issues/3626

– Anup

1 Answer

I am facing the same issue. My buffer configuration for the plugin is as follows:

<buffer>
  @type file
  path /fluentd/log/buffer/coralogix
  queue_limit_length 4
  flush_thread_count 8
  flush_mode interval
  flush_interval 3s
  total_limit_size 5000MB
  chunk_limit_size 8MB
  retry_max_interval 30
  overflow_action throw_exception
</buffer>

Input source: a Kafka topic; logs are shipped to Coralogix. I am still seeing the following errors:

lib/ruby/gems/2.7.0/gems/fluentd-1.11.4/lib/fluent/plugin/buffer.rb:290:in `write'" tag="fluentd-logs"
2022-11-17 10:48:55 +0000 [error]: #0 ignore emit error in object:60f7c error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data"
2022-11-17 10:48:55 +0000 [warn]: #0 failed to write data into buffer by buffer overflow action=:throw_exception
2022-11-17 10:48:55 +0000 [warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/usr/lib/ruby/gems/2.7.0/gems/fluentd-1.11.4/lib/fluent/plugin/buffer.rb:290:in `write'" tag="fluent.error"

Should I consider flushing chunks in parallel?
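
If so, below is a minimal sketch of the buffer section I would try next: more flush threads, drop_oldest_chunk instead of throw_exception, and queue_limit_length removed (as far as I can tell from buffer.rb, setting queue_limit_length overrides total_limit_size with queue_limit_length × chunk_limit_size, which here would be only 4 × 8MB = 32MB). The thread count and the overflow action are untested guesses, not a confirmed fix:

<buffer>
  @type file
  path /fluentd/log/buffer/coralogix
  # queue_limit_length removed: it appears to cap the effective buffer size
  flush_thread_count 16               # more parallel flush threads (guess)
  flush_mode interval
  flush_interval 3s
  total_limit_size 5000MB
  chunk_limit_size 8MB
  retry_max_interval 30
  overflow_action drop_oldest_chunk   # discard oldest chunk instead of raising BufferOverflowError
</buffer>

drop_oldest_chunk avoids the exception at the cost of discarding the oldest buffered data; block would be the alternative if no loss is acceptable.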

Thanks.

– buttercup

  • Please use the buffer config below:

        @type file
        flush_mode interval
        flush_thread_count 16
        path /var/log/fluentd-buffers/k8sapp.buffer
        chunk_limit_size 48MB
        queue_limit_length 512
        flush_interval 5s
        overflow_action drop_oldest_chunk
        retry_max_interval 30s
        retry_forever false
        retry_type exponential_backoff
        retry_timeout 1h
        retry_wait 20s
        retry_max_times 30

    – Anup Nov 22 '22 at 11:08
  • Let me know if you are still facing the same issue. – Anup Nov 22 '22 at 15:33