I am using the ELK stack with logstash-logback-encoder to push logs to Logstash. Now I want to use the same stack, i.e. ELK with logstash-logback-encoder, for analytics.
Flow:
API (Create User) ----> Commit data to RDBMS ---->
Callback listener (on post-persist and post-update) ---->
Logger.info("IndexName: {} . DocId: {} .User json: {}", "Customer", user.getID(), user.getJson());
On Logger.info(), logstash-logback-encoder pushes the log event to Logstash, which in turn pushes it to Elasticsearch (ES).
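With that format string, the message field of each event that reaches Logstash is a single line shaped roughly like this (the ID and JSON values are made up for illustration):

IndexName: Customer . DocId: 1001 .User json: {"id":1001,"name":"Jane","email":"jane@example.com"}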
My logstash.conf is as follows:
input {
  tcp {
    port => 5044
    codec => multiline {
      # NOTE: the multiline codec also requires a "pattern" option; the one
      # below (continuation lines start with whitespace) is only a placeholder.
      pattern => "^\s"
      what => "previous"
    }
  }
}
filter {
  grok {
    match => ["message", "(?<index_name>(?<=IndexName: ).*?(?=\s))"]
    match => ["message", "(?<doc_id>(?<=DocId: ).*?(?=\s))"]
    break_on_match => false
    remove_tag => ["_grokparsefailure", "multiline"]
  }
  mutate {
    gsub => ['message', "\t", " "]
    gsub => ['message', "\e\[(\d*;)*(\d*)m", " "]
  }
}
output {
  if [index_name] == "Customer" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "analytics-customers"
      document_id => "%{doc_id}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
    }
  }
  stdout { codec => rubydebug }
}
My problem is that if I want to use Logstash for analytics, I have to parse the JSON payload using grok. With the number of tables and fields that I have, logstash.conf will become really huge.
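To make that concrete, the kind of per-table block I would have to keep repeating inside the filter section looks roughly like this (the JSON keys and target field names here are made-up examples, not my real schema):

# one hand-written grok block per table/index, repeated for every field
if [index_name] == "Customer" {
  grok {
    match => ["message", '(?<customer_name>(?<="name":").*?(?="))']
    match => ["message", '(?<customer_email>(?<="email":").*?(?="))']
    break_on_match => false
  }
}
if [index_name] == "Order" {
  grok {
    match => ["message", '(?<order_total>(?<="total":)[0-9.]*)']
  }
}

Multiply that by every table and every column and the file quickly becomes unmanageable.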
Is there a way to define grok templates in logstash.conf that I can invoke based on the index name? Something like:
grok {
  match => ["message", "(?<index_name>(?<=IndexName: ).*?(?=\s))"]
  if (index_name == "User") {
    # Invoke a "User" template that extracts/creates fields from the passed JSON.
  }
  if (index_name == "Order") {
    # Invoke an "Order" template that extracts/creates fields from the passed JSON.
  }
}