
I used the Bitnami Helm chart for Fluentd and the official ELK Helm charts for Elasticsearch and Kibana to deploy an EFK stack for log collection in my Kubernetes cluster. But now I'm getting "[warn]: #0 pattern not matched" at the beginning (right after the timestamp) of every log line in the fluentd-forwarder pods. This is a sample log from one of the fluentd-forwarder pods:

2023-08-28 13:14:20 +0000 [warn]: #0 pattern not matched: "2023-08-28T16:44:20.255191355+03:30 stderr F I0828 13:14:20.254863 1 handler.go:232] Adding GroupVersion crd.projectcalico.org v1 to ResourceManager"

and this is Bitnami's default ConfigMap for the fluentd-forwarder configuration:

      # HTTP input for the liveness and readiness probes
      <source>
        @type http
        port 9880
      </source>
      # Get the logs from the containers running in the node
      <source>
        @type tail
        path /var/log/containers/*.log
        # exclude Fluentd logs
        exclude_path /var/log/containers/*fluentd*.log
        pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
        tag kubernetes.*
        read_from_head true
        <parse>
          @type json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </parse>
      </source>
      # enrich with kubernetes metadata
      {{- if or .Values.forwarder.serviceAccount.create .Values.forwarder.serviceAccount.name }}
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
      {{- end }}

I've tried deleting the whole parse block, replacing it with "format json", and also using the config below:

        <parse>
          @type json
        </parse>

But nothing changed; the issue still persists.

By the way, I'm using containerd as my container runtime, so the log files under /var/log/containers are in the CRI format (timestamp, stream, tag, message), which is exactly the shape of the unmatched line above, rather than Docker's JSON format.
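Because of that, I assume the forwarder needs a CRI-style parser instead of @type json. This is just a sketch of the commonly suggested regexp fallback for CRI logs (the expression and time_format are my guess based on the sample line, not something taken from the Bitnami chart, and I haven't verified it there):

        <parse>
          # parse CRI log lines: <timestamp> <stdout|stderr> <F|P> <message>
          @type regexp
          expression /^(?<time>.+) (?<stream>stdout|stderr) (?<logtag>[FP]) (?<log>.+)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </parse>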
