
I would like to ask a question regarding Fluentd.

My Fluentd versions are below:

td-agent-2.1.5-0.x86_64
fluentd 0.10.61

I have a tail input plugin using a multiline format, which parses multiple lines from the log and emits them as a single record, as shown below.

2016-07-31T14:48:06+09:00       arm       {"val1":"15:49:18.602384","val2":"5009","val3":"4896","val4":"3905","val5":"1811","val6":"10287","val7":"10271","val8":"1509","val9":"11064","val10":"10832","val11":"10673","val12":"9553","val13":"10660","val14":"9542","val15":"15:49:18.602509","val16":"3759","val17":"4758","val18":"2930","val19":"1261","val20":"7761","val21":"7767","val22":"1023","val23":"7905","val24":"7711","val25":"7918","val26":"7292","val27":"7940","val28":"6907"}

I need to split all the fields of this one record into 28 separate records, so that Elasticsearch recognizes them as different documents.

Like:

val1
val2
val3
...
val28

Is there any way to achieve this in the Fluentd configuration? Perhaps by embedding Ruby code?
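For illustration, the kind of split being asked about can be sketched in plain Ruby (the field names are shortened and the output record shape is an assumption, not Fluentd plugin code):

```ruby
# A single flat record, as produced by the multiline tail parser
# (truncated to three fields for brevity).
record = {
  "val1" => "15:49:18.602384",
  "val2" => "5009",
  "val3" => "4896"
}

# Emit one record per key/value pair, keeping the source field
# name so each Elasticsearch document stays identifiable.
split_records = record.map do |key, value|
  { "field" => key, "value" => value }
end

split_records.each { |r| puts r }
```

With the full 28-field record, the same logic yields 28 records, each of which Elasticsearch can index as its own document.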

Best Regards, Yu Watanabe


1 Answer


You need to provide a regex that parses the fields separately: store the JSON part of the log line in the field `message`, and store the timestamp in the field `time` (or `@timestamp`). Elasticsearch will then interpret the JSON payload automatically, and it should work as you expect.
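If you specifically need one event per field rather than one event per line, one option in core Fluentd is `out_exec_filter`, which pipes each record through an external command and re-parses every line of the command's output as a separate event. A sketch, assuming a helper script `split_record.rb` (the script name and path are hypothetical) that reads one JSON record from stdin and prints one JSON object per field:

```
# Sketch only: route the multiline records through an external
# splitter script; each line the script prints becomes its own event.
<match arm>
  type exec_filter
  command ruby /etc/td-agent/split_record.rb
  in_format json
  out_format json
  tag arm.split
  num_children 1
</match>
```

Whether this fits your throughput needs on fluentd 0.10 is worth testing; a custom output plugin that calls `Fluent::Engine.emit` once per field is the heavier but more efficient alternative.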

dutzu