
Assuming I could identify more and less important logs by pattern-matching them, is there a way to configure fluentd (or fluentbit) to do intelligent shedding (discards) when it starts to buffer too much (back-pressure from the output)?

Are there other log-processing filters that would do this?

Basically, under low/normal load I want to pass all the logs through, but during an overload or spike I would like to sacrifice some less important logs in order to preserve the more important ones.

A periodic summary log of the discarded count would also be useful, but it's not a strict requirement.

Edit: re-ordering of logs could be a problem, so I would like to find a solution that does not re-order them.
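
To illustrate the kind of thing I have in mind, here is a rough Fluentd sketch (the match patterns and severity regex are just placeholders for my own): split records into an "important" and a "normal" stream with `fluent-plugin-rewrite-tag-filter`, then give each stream its own buffer with a different `overflow_action` so only the less important stream drops data under back-pressure:

```
# Route by importance (hypothetical tag/pattern; adjust to your logs)
<match app.**>
  @type rewrite_tag_filter
  <rule>
    key message
    pattern /ERROR|FATAL/
    tag important.${tag}
  </rule>
  <rule>
    key message
    pattern /.*/
    tag normal.${tag}
  </rule>
</match>

# Important logs: apply back-pressure instead of dropping
<match important.**>
  @type forward
  <buffer>
    overflow_action block
  </buffer>
</match>

# Less important logs: shed oldest buffered chunks when full
<match normal.**>
  @type forward
  <buffer>
    overflow_action drop_oldest_chunk
  </buffer>
</match>
```

One caveat I can already see: splitting into two independently buffered streams could re-order important records relative to less important ones, which (per my edit above) is exactly what I'd like to avoid, so I'm not sure this is the right approach.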

Gregor