
I'm writing a custom application that outputs logs in JSON format, one line per log in the output file. I want to get these logs into both AWS CloudWatch and Splunk for analysis. My question is what is the standard place to put the timestamp in the log line? Should it go before the JSON blob like this:

2016-05-30 10:13:00 { "field1": 6, "field2": "Hello world!", ...}

Or should it go inside the JSON like this:

{"timestamp": "2016-05-30 10:13:00", "field1": 6, "field2": "Hello world!", ...}

Splunk seems to prefer the latter: if you tell it the log format is _json, it seems unhappy with the former, and you get errors like this:

05-28-2016 16:29:17.973 +0100 ERROR JsonLineBreaker - JSON StreamId:18154196253238674442 had parsing error:Unexpected character: '-' - data_source=.....

But CloudWatch appears to want the former, i.e. it wants the timestamp as the first thing on the log line, which is fine for a non-JSON log but apparently not for a JSON one. I've tried searching everywhere for answers, and whilst there are lots of articles on timestamp formatting, nothing seems to cover this JSON-related question.

1 Answer


As you wrote, logging one pure JSON object per message is the best way to do it. Tools that cannot handle this simply do not support JSON logging very well.

Splunk reads JSON log lines very well. It even parses each message for you and highlights the fields in the search results.

However, there is no formal standard here; the only real constraint is that a log line is not JSON at all if it cannot be parsed as a JSON object, which rules out prefixing the timestamp outside the braces.
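For illustration, here is a minimal sketch in Python of the embedded-timestamp approach: one JSON object per line, with the timestamp inside the object in ISO 8601 UTC form. The `log_event` helper and the field names are hypothetical, not part of any particular library.

```python
import io
import json
import time

def log_event(stream, **fields):
    # Emit one self-contained JSON object per line, with the
    # timestamp carried inside the object rather than prefixed to it.
    record = {"timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())}
    record.update(fields)
    stream.write(json.dumps(record) + "\n")

# Usage: write an event to an in-memory stream (stdout or a file in practice)
buf = io.StringIO()
log_event(buf, field1=6, field2="Hello world!")
line = buf.getvalue()
```

This format keeps every line machine-parseable, and both tools can be configured to find the timestamp inside the JSON: Splunk via its timestamp-recognition settings (TIME_PREFIX / TIME_FORMAT in props.conf), and, as far as I recall, the CloudWatch Logs agent via its datetime_format configuration option.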

migu