I’m relatively new to Kibana and the ELK (Elasticsearch, Logstash and Kibana) stack and I’ve been doing pretty well setting one up, but I have run into what I see as an odd issue and need some help understanding what’s happening.
I’m using the ELK stack to crunch some Apache logs, but I have my own custom type settings. So I need to specify the field types explicitly instead of letting Elasticsearch’s dynamic mapping guess them (I had thought Logstash or Kibana did the guessing, but it’s Elasticsearch).
From reading the Logstash documentation, it seems pretty clear that I can set the `template` value in the `output.elasticsearch` block of the config, shown here:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-apache"
    document_id => "%{[@metadata][fingerprint]}"
    manage_template => false
    template => "/path/to/logstash/logstash-apache.json"
    template_name => "logstash-apache"
    template_overwrite => true
  }
  stdout {
    codec => rubydebug
  }
}
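For context, the template file itself is a standard Elasticsearch index template. A minimal sketch of its shape might look like this (the field names and settings here are illustrative placeholders, not my actual file; older Elasticsearch versions use `"template"` instead of `"index_patterns"` and may require a mapping type level):

```json
{
  "index_patterns": ["logstash-apache*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "clientip": { "type": "ip" },
      "response": { "type": "integer" },
      "request":  { "type": "keyword" }
    }
  }
}
```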
I’m 100% sure the path is correct. But for some reason, when I use this config and launch Logstash, the mappings I specified in `logstash-apache.json` don’t show up. The index in Kibana is `logstash-apache` as well, so this should work, right?
So what I do now is preload the mappings template directly into Elasticsearch like this:
curl -sS -XPUT "http://localhost:9200/_template/logstash-apache" -H 'Content-Type: application/json' -d @/path/to/logstash/logstash-apache.json
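To check whether the template actually landed in Elasticsearch (whether loaded by Logstash or by the command above), it can be fetched back; this assumes Elasticsearch is reachable on `localhost:9200`:

```shell
# Returns the installed template, or an empty object/404 if it was never installed
curl -s "http://localhost:9200/_template/logstash-apache?pretty"
```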
And that clearly works: the data gets the proper mapping. But this approach is fairly clunky. It would be cleaner to have it all come from the `logstash-apache.conf` file I have set up.
So what am I doing wrong? What can I do to have my custom mappings template applied via that `logstash-apache.conf`, without having to jump through the extra hoop of a `curl` command?