
I use Filebeat to write logs to an Elasticsearch server. My logs are in JSON format. Every line is a JSON string that looks like this:

{"@timestamp": "2017-04-11T07:52:48,230", "user_id": "1", "delay": 12}

I want the @timestamp field from my logs to replace the @timestamp field that Filebeat creates when reading the logs. On my Kibana dashboard I always get

json_error:@timestamp not overwritten (parse error on 2017-04-11T07:52:48,230)

and end up seeing the @timestamp field created by Filebeat.

My Filebeat config includes these lines for overwriting fields:

json.keys_under_root: true
json.overwrite_keys: true
json.add_error_key: true
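For context, these options sit under the prospector definition in my filebeat.yml, roughly like this (the input type and log path here are just placeholders):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/*.log   # placeholder path
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
```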

Also, per my log4j config, the @timestamp field created in my logs is in ISO 8601 format. Any idea what the problem is and why the @timestamp field is not overwritten?

LetsPlayYahtzee

1 Answer


The problem was the format of the timestamp that log4j was producing. Filebeat expects something of the form "2017-04-11T09:38:33.365Z": it needs the T in the middle, the Z at the end, and a dot instead of a comma before the milliseconds.

The quickest (and somewhat dirty) way I found to do that was using the following log4j pattern:

pattern='{"@timestamp": "%d{yyyy-MM-dd}T%d{HH:mm:ss.SSS}Z"}'
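To see why the comma-separated form is rejected while the dot/Z form goes through, you can run both timestamps through a strict parser using the same layout; a minimal Python sketch (the format string mirrors the shape Filebeat expects, not Filebeat's internal parser):

```python
from datetime import datetime

# The layout Filebeat accepts: T separator, dot before milliseconds, trailing Z.
LAYOUT = "%Y-%m-%dT%H:%M:%S.%fZ"

def parses(ts: str) -> bool:
    """Return True if ts matches the strict dot/Z layout."""
    try:
        datetime.strptime(ts, LAYOUT)
        return True
    except ValueError:
        return False

print(parses("2017-04-11T09:38:33.365Z"))  # True  - the form Filebeat accepts
print(parses("2017-04-11T07:52:48,230"))   # False - comma instead of dot, no Z
```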

A similar issue can be found here. The suggested solution does not solve the Filebeat issue though, because it uses a comma before the milliseconds.
