I am new to the ELK stack. I am trying to analyze and process Java stack-trace logs from my application to extract useful information. After starting Logstash I can see my data under the Discover tab in Kibana, but when I try to create a visualization, not all of the fields are available there, and the data is inconsistent. Can someone please clarify what is wrong in my case? My snippets are below.
Sample log file:
Thread #1: t@-1824503488, lwp=10848, ref=0xb8535a0, session=84521D86BBE4D71690E2A6EF29A489BE:mx135721697661ec76bb:(WebServiceFacade.java:84), ms=0xa0819a40
<Start Stack Trace>
<1 - ADK Verbose Trace Entry>
stateless dispatch for invokeClass.bosInterface executing
Active: 18 minutes 50.00 seconds
User:
User1
Tenant:
Session:
84521D86BBE4D71690E2A6EF29A489BE:mx135721697661ec76bb:(WebServiceFacade.java:84)
Parameters:
bosContext _cntx:
user:
User Ag
depth:
3
session id:
84521D86BBE4D71690E2A6EF29A489BE:mx135721697661ec76bb:(WebServiceFacade.java:84)
bosUTF _className:
XXXXXProcess
bosStringList _construct:
2 entries
$$MXRIP$$|java.util.HashMap
10
bosUTF _methodName:
YYYYYYCopies
bosStringList _params:
2 entries
$$MXRIP$$|java.util.HashMap
9
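To make the layout clearer: in this trace each key sits on its own line ending in a colon, and its value is on the following line, which may be why a plain newline/colon kv split comes out inconsistent. A rough Python sketch of how these lines would have to be paired (purely illustrative, not part of my pipeline):

```python
# Illustrative only: pair "Key:" lines with the value line that follows,
# mimicking the layout of the trace above. A flat newline/colon split
# would instead treat values like the session id (which contains ":")
# ambiguously.
sample = """User:
User1
depth:
3
session id:
84521D86BBE4D71690E2A6EF29A489BE:mx135721697661ec76bb:(WebServiceFacade.java:84)
"""

def pair_fields(text):
    fields = {}
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    i = 0
    while i < len(lines):
        if lines[i].endswith(":"):
            key = lines[i][:-1]
            # The value is the next non-empty line, if any.
            value = lines[i + 1] if i + 1 < len(lines) else ""
            fields[key] = value
            i += 2
        else:
            i += 1
    return fields

print(pair_fields(sample))
# → {'User': 'User1', 'depth': '3', 'session id': '84521D86BBE4D71690E2A6EF29A489BE:mx135721697661ec76bb:(WebServiceFacade.java:84)'}
```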
logstash.conf file:
input {
  file {
    path => "C:/Temp/BOHLogs/SampleJava.log"
    start_position => "beginning"
    sincedb_path => "NUL"
    codec => multiline {
      pattern => "^Thread%{SPACE}%{GREEDYDATA}"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => { "message" => "%{NOTSPACE:Thread}:\s%{GREEDYDATA:Data}" }
  }
  kv {
    source => "Data"
    field_split => " "
    value_split => "="
  }
  kv {
    source => "Data"
    field_split => "\n"
    value_split => ":"
  }
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS Z", "MMM dd, yyyy HH:mm:ss a"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dummya_log"
  }
  stdout { codec => rubydebug }
}
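For faster iteration on the filters, I have also been pasting log chunks into a stdin-based copy of the config (a minimal sketch; the multiline pattern is the same as above, and `auto_flush_interval` is only there so the last buffered event is emitted without waiting for a following line):

```
input {
  stdin {
    codec => multiline {
      pattern => "^Thread%{SPACE}%{GREEDYDATA}"
      negate => true
      what => "previous"
      auto_flush_interval => 1
    }
  }
}
output {
  stdout { codec => rubydebug }
}
```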
Output in the Logstash window (screenshot):
Available fields in the Logstash window (screenshot):