I am sending logs from my NestJS project to Elasticsearch using Fluent Bit. However, I periodically get the following error:

[2022/06/14 21:43:18] [ warn] [engine] failed to flush chunk '1-1654871535.433259986.flb', retry in 858 seconds: task_id=18, input=forward.0 > output=es.1 (out_id=1)
[2022/06/14 21:43:19] [error] [output:es:es.1] error: Output
{"took":14,"errors":true,"items":[{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"p-ErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263576,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"qOErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263577,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"qeErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263578,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"quErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263579,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"q-ErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263580,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"rOErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263581,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"reErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263582,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"ruErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263583,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"r-ErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263584,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"sOErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263585,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"seErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263586,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"suErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263587,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"s-ErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263588,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"tOErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263589,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"teErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263590,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"tuErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263591,"_primary_term":1,
"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"t-ErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263592,"_primary_term":1,"status":201}},{"create":{"_index":"docker_log-2022.06.13","_type":"docker","_id":"uOErZIEBcmXNgaVYxrKQ","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3263593,"_primary_term":1,"status":201}},{"create":{"

Am I missing something here? The logs do reach Elasticsearch correctly, but when I check the Fluent Bit container, it keeps reporting this error.
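
For what it's worth, the documents do show up when I query Elasticsearch directly, along these lines (host, port and index pattern taken from the config below):

# list the daily indices created by the es output (Logstash_Prefix docker_log)
curl -s 'http://127.0.0.1:9200/_cat/indices/docker_log-*?v'
# count the documents in one of them
curl -s 'http://127.0.0.1:9200/docker_log-2022.06.13/_count'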

Here is the Fluent Bit config:

[SERVICE]
    Flush       1
    Daemon      off
    Log_level   info
    Parsers_File  parsers.conf
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020
[INPUT]
    Name forward
    Listen 0.0.0.0
    Port 24224
[INPUT]
    Name cpu
    Tag metrics_cpu
[INPUT]
    Name disk
    Tag metrics_disk
[INPUT]
    Name mem
    Tag metrics_memory
[INPUT]
    Name netif
    Tag metrics_netif
    Interface eth0
[FILTER]
    Name         parser
    Match        docker_logs
    Key_Name     log
    Parser       escape_utf8_log
    Reserve_Data True
[FILTER]
    Name         parser
    Match        docker_logs
    Key_Name     message
    Parser       escape_message
    Reserve_Data True
[FILTER]
    Name parser
    Match docker_logs
    Key_Name log
    Parser docker
    Reserve_Data True
[OUTPUT]
    Name es
    Match metrics_*
    Host 127.0.0.1
    Port 9200
    Index docker_40_metrics
[OUTPUT]
    Name es
    Trace_Error On
    Match docker_logs
    Host 127.0.0.1
    Port 9200
    Index fluentbit
    Type docker
    Logstash_Format On
    Logstash_Prefix docker_log
    Retry_Limit     5

parsers.conf:

[PARSER]
    Name        syslog-rfc5424
    Format      regex
    Regex       ^\<(?<pri>[0-9]{1,5})\>1 (?<time>[^ ]+) (?<host>[^ ]+) (?<ident>[^ ]+) (?<pid>[-0-9]+) (?<msgid>[^ ]+) (?<extradata>(\[(.*?)\]|-)) (?<message>.+)$
[PARSER]
    Name        web-log
    Format      regex
    Regex       (?<host>[^ ]*) [^ ]* "(?<user>[^\ ]*)\" \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<status_code>[^ ]*) "?(?<size>[^ "]*)"? (?<origin>[^ ]*) [\w\.]+=(?<elapsed_usec>[^ ]*)
    Time_Key    time
    Time_Format %Y-%m-%d %H:%M:%S %z
    Time_Keep   On
[PARSER]
    Name   apache2
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
    Name   apache_error
    Format regex
    Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
[PARSER]
    Name   nginx
    Format regex
    Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
    Name   json
    Format json
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%d %H:%M:%S.%L
    Time_Keep   On
[PARSER]
    Name        syslog
    Format      regex
    Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
    Time_Key    time
    Time_Format %b %d %H:%M:%S
[PARSER]
    Name   escape_utf8_log
    Format json
    # Command      | Decoder     | Field | Optional Action
    # =============|=====================|=================
    Decode_Field_As  escaped_utf8   log     try_next
    Decode_Field         json       log     try_next
[PARSER]
    Name   escape_message
    Format json
    # Command      | Decoder | Field | Optional Action
    # =============|=================|=================
    Decode_Field_As  escaped_utf8   message   try_next
    Decode_Field         json       message   try_next

1 Answer


This happened to me. I was able to see more detailed error logs once I enabled the Trace_Error parameter and then looked at the Fluent Bit logs:

[OUTPUT]
  Name              es
  Match             **
  Host              my_es_host
  Port              my_es_port
  Index             my-index
  Trace_Error       On
  Trace_Output      On

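With Fluent Bit running as a container, the traced output can then be read from its logs, for example (the container/pod names here are just placeholders):

docker logs -f fluent-bit          # plain Docker
kubectl logs -f <fluent-bit-pod>   # if it runs in Kubernetes instead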
In my case, I got the following error message:

[2023/03/20 19:18:24] [error] [output:es:es.0] error: Output
{"took":22,"errors":true,"items":[{"create":{"_index":"my-index","_type":"_doc","_id":null,"status":404,"error":{"type":"index_not_found_exception","reason":"no such index [my-index] and [action.auto_create_index] ([+.*]) doesn't match","index_uuid":"_na_","index":"my-index"}}}]}

It turned out that our Elasticsearch admin had disabled automatic index creation, so Fluent Bit could not create the index when it did not exist. I therefore had to use an existing index name in the Fluent Bit configuration.
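
If you can reach the cluster yourself, you can check both the auto-creation setting and whether the target index exists (my_es_host, my_es_port and my-index are the same placeholders as in the config above):

# show the effective action.auto_create_index setting
curl -s 'http://my_es_host:my_es_port/_cluster/settings?include_defaults=true' | grep -o 'auto_create_index[^,}]*'
# HEAD request on the index: 200 means it exists, 404 means it would have to be created first
curl -s -o /dev/null -w '%{http_code}\n' -I 'http://my_es_host:my_es_port/my-index'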