
I have deployed the Bitnami EFK stack on a K8s environment:

  repository: bitnami/fluentd
  tag: 1.12.1-debian-10-r0

Currently, one of the modules/applications inside my namespaces is configured to generate JSON logs, and I can see the logs in Kibana in JSON format.

But logs get split/truncated after 16385 characters, so I cannot see the full log trace. I have tested some of the concat plugins, but they have not given the expected results so far, or maybe my implementation of the plugins is wrong.

fluentd-inputs.conf: |
      # Get the logs from the containers running in the node
      <source>
        @type tail
        path /var/log/containers/*.log
        tag kubernetes.*
        <parse>
          @type json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </parse>
      </source>
      # enrich with kubernetes metadata
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
      <filter kubernetes.**>
        @type parser
        key_name log
        reserve_data true
        <parse>
          @type json
        </parse>
      </filter>
      <filter kubernetes.**>
        @type concat
        key log
        stream_identity_key @timestamp
        #multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d+ .*/
        multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}/
        flush_interval 5
      </filter>
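
For context, as far as I understand, Docker's json-file logging driver splits container lines longer than 16K into multiple partial records, and only the final fragment keeps its trailing newline. One recipe I have seen suggested for this case is roughly the following (a sketch only, untested in my setup; the key and regexp are assumptions based on my log format):

      # Rejoin log lines split by Docker at the ~16K boundary:
      # partial fragments have no trailing "\n", so a record ending
      # in "\n" marks the end of a logical line.
      <filter kubernetes.**>
        @type concat
        key log
        use_first_timestamp true
        multiline_end_regexp /\n$/
        separator ""
        flush_interval 5
      </filter>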

fluentd-output.conf: |
      {{- if .Values.aggregator.enabled }}
      <match **>
        @type forward
        # Elasticsearch forward
        <buffer>
          @type file
          path /opt/bitnami/fluentd/logs/buffers/logs.buffer
          total_limit_size 1024MB
          chunk_limit_size 16MB
          flush_mode interval
          retry_type exponential_backoff
          retry_timeout 30m
          retry_max_interval 30
          overflow_action drop_oldest_chunk
          flush_thread_count 2
          flush_interval 5s
        </buffer>
      </match>
      {{- else }}
      # Send the logs to the standard output
      <match **>
        @type stdout
      </match>
      {{- end }}

I am not sure, but one reason could be that the fluentd configuration already uses some plugins to filter/parse the JSON data, and maybe a new concat plugin has to be wired in differently, or configured in a different way? https://github.com/fluent-plugins-nursery/fluent-plugin-concat
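
One variation I have been thinking about (again only a sketch; the ordering is my assumption, not something I have verified) is to run the concat filter before the kubernetes_metadata and parser filters, so the split fragments are joined back into a single record before any JSON parsing happens:

      # Hypothetical ordering: concatenate fragments first, then enrich/parse
      <filter kubernetes.**>
        @type concat
        key log
        multiline_end_regexp /\n$/
        separator ""
      </filter>
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
      <filter kubernetes.**>
        @type parser
        key_name log
        reserve_data true
        <parse>
          @type json
        </parse>
      </filter>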

Can anyone please help? Thanks!

kishorK
  • Where do you see the truncated JSON i.e. `stdout` and `elasticsearch` both? – Azeem May 03 '21 at 09:46
  • Hi Azeem, I see in both! – kishorK May 07 '21 at 06:38
  • Hi! Can you try and test without filters and `stdout` only? Also, are there any relevant errors in the fluentd logs? – Azeem May 07 '21 at 10:06
  • Hi, I see the logs in stdout: tag="kubernetes.var.log.containers.authserver-ma-entersekt-86c687f778-jbm6z_dbh-v1-dev_authserver-ma-entersekt-7ee27fb6c10f78a6d31c5863b168359141ab242bb5636109774276b6872b3ee9.log" time=2021-05-09 10:24:25.196245905 +0000 record={"log"=>"2021-05-09 12:24:25,195 [,] DEBUG header.writers.HstsHeaderWriter (HstsHeaderWriter.java:129) - Not injecting HSTS header since it did not match the requestMatcher org.springframework.security.web.header.writers.HstsHeaderWriter$SecureRequestMatcher@486a233c\n", "stream"=>"stdout", "docker"=>{"container_id"=>"7ee27fb6c10f78a6 – kishorK May 09 '21 at 10:28
