
I have a log file with a bunch of lines of json data. For example, here is one line:

{"name":"sampleApplicationName","hostname":"sampleHostName","pid":000000,"AppModule":"sampleAppModuleName","msg":"testMessage","time":"2016-02-23T19:33:10.468Z","v":0}

I want Logstash to break up the different components of the JSON string so that I can create visualizations in Kibana based on them. I have played around with the indexer file and tried countless variations, using both the json filter and grok patterns, but I can't get anything to work. Any help is much appreciated.

FAhmed
  • Start with the file{} input with the json codec. If that doesn't work, post what you've done and ask for help in fixing it. – Alain Collins Apr 15 '16 at 19:00
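
For reference, a minimal sketch of that suggestion. The path is a placeholder, and the json codec parses each line into separate event fields as it is read:

input {
    file {
        path => "/var/log/app.log"       # placeholder: path to your JSON log file
        codec => "json"                  # parse each line as a JSON document
        start_position => "beginning"    # read the existing file from the start
    }
}

output {
    # Print parsed events so each JSON key shows up as its own field
    stdout { codec => rubydebug }
}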

1 Answer


Below is an example config that I use. Try pasting your JSON line into the command prompt to validate that it is working.

input {
    # Read events from stdin so the config can be tested by pasting a line
    stdin {}
}

filter {
    # Parse the JSON string in the "message" field into individual event fields
    json {
        source => "message"
    }

    # Copy routing fields into @metadata so the output can use them
    # without indexing them as part of the document
    mutate {
        add_field => {
            "[@metadata][tenant-id]" => "%{[tenant-id]}"
            "[@metadata][data-type]" => "%{[data-type]}"
            "[@metadata][data-id]" => "%{[data-id]}"
        }
    }

    # Only "build" events get an index action
    if [data-type] == "build" {
        mutate {
            add_field => { "[@metadata][action]" => "index" }
        }
    }
}

output {
    # Print each event, including @metadata, for debugging
    stdout { codec => rubydebug { metadata => true } }

    file { path => "/tmp/jenkins-logstash.log" }

    elasticsearch {
        action => "%{[@metadata][action]}"
        hosts => "XXX:9200"
        index => "tenant-%{[@metadata][tenant-id]}"
        document_type => "builds"
        document_id => "%{[@metadata][data-id]}"
        workers => 1
    }
}
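
Note that the mutate block and the elasticsearch settings reference tenant-id, data-type, and data-id fields that are specific to this config's data and are not present in the question's sample line, so those %{...} references would come through as literal strings there. For the sample line, a stripped-down sketch like the following might be closer; the hosts value and index name are placeholders, and the date filter maps the line's own time field onto @timestamp so Kibana's time filtering lines up with the original events:

input {
    stdin {}
}

filter {
    # Break the JSON string in "message" into individual event fields
    json {
        source => "message"
    }

    # Use the log line's own timestamp as the event timestamp
    date {
        match => ["time", "ISO8601"]
    }
}

output {
    stdout { codec => rubydebug }

    elasticsearch {
        hosts => "localhost:9200"            # placeholder: your Elasticsearch host
        index => "app-logs-%{+YYYY.MM.dd}"   # placeholder: daily index name
    }
}

Save either config to a file and run it with bin/logstash -f <config>, then paste the sample line; the rubydebug output should show name, hostname, pid, AppModule, msg, time, and v as separate fields.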
Eyal.Dahari