I have my NGINX logs formatted as JSON:
log_format le_json '{ "@timestamp": "$time_iso8601", '
                   '"remote_addr": "$remote_addr", '
                   '"remote_user": "$remote_user", '
                   '"body_bytes_sent": "$body_bytes_sent", '
                   '"status": $status, '
                   '"request": "$request", '
                   '"request_method": "$request_method", '
                   '"response_time": $upstream_response_time, '
                   '"http_referrer": "$http_referer", '
                   '"http_user_agent": "$http_user_agent" }';
My log gets picked up by Filebeat and sent to Logstash, which has the following config:
input {
  beats {
    port => 5044
    codec => "json"
  }
}

filter {
  geoip {
    database => "C:/GeoLiteCity.dat"
    source => "[remote_addr]"
  }
}

output {
  elasticsearch {
    template => "C:/ELK/logstash-2.2.2/templates/elasticsearch-template.json"
    template_overwrite => true
    hosts => ["127.0.0.1"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
The problem I'm having is with $upstream_response_time. When there is no response time, NGINX puts a '-' in this field. As you can see, I don't put "" around $upstream_response_time, because I want it as a number so I can perform calculations on it in Kibana and display them. When '-' is sent, I get a _jsonparsefailure in Logstash because it is not a number.
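To illustrate, a log line for a request with no upstream ends up looking roughly like this (the values are made up, and I've abbreviated most fields) -- the bare '-' is what makes the line invalid JSON:

{ "@timestamp": "...", ..., "status": 200, ..., "response_time": -, ... }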
I would like to set all the '-' values to 0. What would be the best way to do this? I've had no success trying to filter it in the NGINX config. I think it needs to be done before the log is shipped to Logstash, because that's where the parse failure occurs.
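By "filter it in the NGINX config" I mean something along these lines: an http-level map block that substitutes 0 for '-', with the log_format then using the mapped variable instead of $upstream_response_time. This is only a sketch of the idea, and $response_time_num is a placeholder name I made up:

map $upstream_response_time $response_time_num {
    # no upstream response: log 0 instead of '-'
    "-"      0;
    default  $upstream_response_time;
}

# ...and in log_format:
# '"response_time": $response_time_num, '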
Any ideas?