
I'm trying to forward logs to Elasticsearch and got stuck setting the index dynamically (by a field in the input data).

My input data format is JSON and always has the key "es_idx". I want to forward to Elasticsearch by that key and have a timestamp added. I use logstash_format true to get the timestamp feature, and logstash_prefix to set an index name other than the default.
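For example, with logstash_format true I expect an input event like this to end up in an index named something like blabla-2020.05.27 (the es_idx value plus the date suffix):

{"tenant_id":"test","es_idx":"blabla"}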

This is what my fluentd config looks like:

# fluentd/conf/fluent.conf
<source>
  @type stdin
  # Input pattern. It depends on Parser plugin
  format json

  # Optional. default is stdin.events
</source>

<match *.**>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type elasticsearch
    host <es-host>
    port <es-port>
    logstash_format true
    logstash_prefix ${$record["es_idx"]}
    type_name fluentd
    flush_interval 5s
  </store>

</match>

When using the following input {"tenant_id":"test","es_idx":"blabla"}, I'm getting the following error:

2020-05-27 10:38:06 +0300 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="stdin.events" time=2020-05-27 10:37:59.498450000 +0300 record={"tenant_id"=>"test", "es_idx"=>"blabla"}

If I set logstash_prefix to a plain string instead, e.g. "logstash_prefix blabla", it works fine.

Does anyone have a clue what can be the issue?

Shalom Balulu

2 Answers


To set the Elasticsearch index dynamically you need to use chunk keys, as described here. In your case you need a config like this:

<match *.**>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type elasticsearch
    host <es-host>
    port <es-port>
    logstash_format true
    logstash_prefix ${es_idx}
    logstash_dateformat %Y%m%d
    type_name fluentd
    flush_interval 5s

    <buffer es_idx>
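      # es_idx is a chunk key: events are grouped by its value and the
      # ${es_idx} placeholder above is resolved once per chunk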
      @type file
      path /fluentd/log/elastic-buffer
      flush_thread_count 8
      flush_interval 1s
      chunk_limit_size 32M
      queue_limit_length 4
      flush_mode interval
      retry_max_interval 30
      retry_forever true
    </buffer>
  </store>
</match>
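With es_idx declared as a chunk key, Fluentd groups events into separate buffer chunks per es_idx value and resolves the ${es_idx} placeholder once per chunk, so each record lands in an index derived from its own es_idx field. Assuming Fluentd is started with this config, you can verify it through your stdin source roughly like this (with the %Y%m%d dateformat above, the document should land in an index like blabla-20200527):

echo '{"tenant_id":"test","es_idx":"blabla"}' | fluentd -c fluent.conf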

Another option is to use the elasticsearch_dynamic output type:

<match my.logs.*>
  @type elasticsearch_dynamic
  hosts ${record['host1']}:9200,${record['host2']}:9200
  logstash_prefix ${tag_parts[3]}
  port ${9200+rand(4)}
  index_name ${tag_parts[2]}-${Time.at(time).getutc.strftime(@logstash_dateformat)}
</match>
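Adapted to your record it would look something like this (a sketch, untested; note that elasticsearch_dynamic evaluates its ${...} placeholders as Ruby per event, which makes it noticeably slower than the chunk-key approach above):

<match *.**>
  @type elasticsearch_dynamic
  host <es-host>
  port <es-port>
  logstash_format true
  logstash_prefix ${record['es_idx']}
  type_name fluentd
</match>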
Al-waleed Shihadeh

I succeeded in getting the value from the record object like this:

<match *.**>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type elasticsearch
    @log_level debug
    host <host>
    logstash_format true
    logstash_prefix ${es_index_pattern}
    type_name fluentd
    flush_interval 5s
    <buffer tag, es_index_pattern>
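      # es_index_pattern is a chunk key: events are grouped by its value
      # and ${es_index_pattern} above is resolved per chunk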
      @type memory
    </buffer>
  </store>
</match>
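With this config, a record such as the following should land in an index like blabla-2020.05.27 (the default logstash_dateformat); note that the es_index_pattern key has to be present in every record, since it is used as a chunk key:

{"tenant_id":"test","es_index_pattern":"blabla"}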
Shalom Balulu