
I've been trying to write a new config for my Fluent Bit for a few days and I can't figure out how to get the best performance out of it. Is there a better way to send a large volume of logs (multiline, roughly 20,000-40,000/s, in-memory buffering only) to two outputs based on Kubernetes labels? In k8s we have a label that says whether the logs need to be

  1. sent to Redis,
  2. forwarded to disc, or
  3. both.

Under heavy load Fluent Bit still throws "mem buf overlimit". I can't increase the limit because of resource constraints, so I need to configure Fluent Bit as well as I can (and then investigate the other components, but they don't look that busy).
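
To illustrate the routing, here is a minimal sketch of what the label looks like on a pod; the values are just examples, and output_label is the key my rewrite_tag rules match on:

# excerpt of a pod spec - the value of output_label decides where the logs go
# (a value that starts with "disc" and ends with "redis", e.g. "disc-redis",
#  is routed to both outputs)
metadata:
  labels:
    output_label: redis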

Can someone please check my config and tell me if there is anything I can improve?

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-fluentd-configmap
  namespace: logging
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Parsers_File  parsers.conf
        Daemon        off
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_PORT     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filters.conf
    @INCLUDE output.conf

  input-kubernetes.conf: |
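    # Tail container logs; Docker_Mode re-joins log lines split by Docker and
    # uses the multi_line parser to detect the first line of a multiline record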
    [INPUT]
        Name              tail
        Tag               kube.<namespace_name>.<container_name>.<pod_name>.<docker_id>-
        Tag_Regex         (?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$
        Path              /var/log/containers/*.log
        Path_Key          path_filename
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     50MB
        Buffer_Max_Size   8MB
        Refresh_Interval  30
        Skip_Long_Lines   On
        Docker_Mode        On
        Docker_Mode_Flush  4
        Docker_Mode_Parser multi_line
  filters.conf: |
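    # Enrich records with Kubernetes metadata; the custom tag is parsed back
    # with the k8s-custom-tag regex parser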
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_Tag_Prefix     kube.
        Regex_Parser        k8s-custom-tag
        Merge_Log           On
        K8S-Logging.Parser  On
        K8S-Logging.Exclude On
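    # Lift kubernetes.* and then kubernetes.labels.* to the top level so that
    # output_label becomes a top-level key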
    [FILTER]
        Name         nest
        Match        kube.*
        Operation    lift
        Nested_under kubernetes
    [FILTER]
        Name         nest
        Match        kube.*
        Operation    lift
        Nested_under labels
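    # Mark records whose output_label is disc or redis with keep=true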
    [FILTER]
        Name         modify
        Match        kube.*
        Condition    Key_value_matches output_label /(disc|redis)/
        Set          keep true           
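    # Re-emit matching records under disc.* / redis.* tags so the outputs can match them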
    [FILTER]
        Name         rewrite_tag
        Match        kube.*
        Rule         $output_label ^(disc) disc.$TAG true
        Emitter_Mem_Buf_Limit 20M 
    [FILTER]
        Name         rewrite_tag
        Match        kube.*
        Rule         $output_label (redis)$ redis.$TAG false
        Emitter_Mem_Buf_Limit 20M 
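    # Drop everything that was not marked with keep=true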
    [FILTER]
        Name         grep
        Match        *
        Regex        keep true

  output.conf: |
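    # disc.* is forwarded to Fluentd, redis.* goes directly to Redis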
    [OUTPUT]
        Name         forward
        Match        disc.*
        Host         IP_address_of_fluentd
        Port         9880
        Retry_Limit  5
    [OUTPUT]
        Name         redis
        Match        redis.*
        Hosts        IP_address_of_redis
        Key          k8s

  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
        Decode_Field_As   escaped_utf8    log    do_next
        Decode_Field_As   json       log
        Reserve_Data On
        Preserve_Key On
    [PARSER]
        Name        k8s-custom-tag
        Format      regex
        Regex       ^(?<namespace_name>[^_]+)\.(?<container_name>.+)\.(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)\.(?<docker_id>[a-z0-9]{64})-$
    [PARSER]
        Name        multi_line
        Format      regex
        Regex       (?<log>^{"log":"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}[\.,]\d{3} \(.*)

Any help appreciated :)

Jane