
Setting up ELK is very easy until you hit the Logstash filter. I have a log delimited into 10 fields by `|`. Some fields may be blank, but I am sure there will always be 10 fields:

7/5/2015 10:10:18 AM|KDCVISH01|
|ClassNameUnavailable:MethodNameUnavailable|CustomerView|xwz261|ef315792-5c41-4bdf-aa66-73317e82e4d6|52|6182d1a1-7916-4874-995b-bc9a23437dab|<Exception>
afkh akla  487234 &*<Exception>

Q1: I am confused about how a grok or regex pattern will pick only the field that I am looking for and not a similar match from another field. For example, what is the guarantee that the DATESTAMP pattern picks only the first value and not the timestamp present in the last field (buried in the stack trace)?

Q2: Is there a way to define a positional mapping? For example, the 1st field is dateTime, the 2nd is machine name, the 3rd is class name, and so on. This would make sure the fields are displayed in Kibana whether or not a value is present.
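For context, the positional mapping asked about in Q2 amounts to splitting on the delimiter and assigning names by index, so every field gets a key even when its value is blank. A minimal Python sketch; the field names below are illustrative assumptions, not the log's actual schema:

```python
# Positional mapping: split the raw line on "|" and name fields by index.
# Field names are illustrative assumptions, not the log's real schema.
FIELD_NAMES = ["dateTime", "machineName", "className", "methodName",
               "view", "user", "sessionId", "code", "requestId", "exception"]

def parse_line(line):
    parts = line.split("|")
    # Every name gets a key even if its value is blank, so downstream
    # tools (e.g. Kibana) always see all 10 fields.
    return {name: (parts[i] if i < len(parts) else "")
            for i, name in enumerate(FIELD_NAMES)}

record = parse_line("7/5/2015 10:10:18 AM|KDCVISH01||Cls:Mth|CustomerView"
                    "|xwz261|id1|52|id2|<Exception>...")
print(record["machineName"])        # KDCVISH01
print(repr(record["className"]))    # '' (the blank third field is preserved)
```

This is the guarantee a delimiter split gives that a free-form regex does not: field identity comes from position, not from what the value happens to look like.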


1 Answer


I know I am a little late, but here is a simple solution which I am using.

Option 1: replace your `|` with a space:

filter {
    mutate {
            gsub => ["message","\|"," "]
    }

    grok {
            match => ["message","%{DATESTAMP:time} %{WORD:MESSAGE1} %{WORD:EXCEPTION} %{WORD:MESSAGE2}"]
    }
}
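On Q1: regex engines (and therefore grok) try each starting position from the beginning of the string and return the leftmost match, so the timestamp at the start of the line wins over one buried later in the stack trace. A hedged Python sketch with a simplified timestamp pattern (an assumption for illustration; grok's actual DATESTAMP definition is more general):

```python
import re

# Simplified timestamp pattern (an assumption; grok's DATESTAMP is broader).
TS = r"\d{1,2}/\d{1,2}/\d{4} \d{1,2}:\d{2}:\d{2} [AP]M"

line = ("7/5/2015 10:10:18 AM|KDCVISH01|...|"
        "stack trace containing 8/9/2016 01:02:03 PM inside")

# re.search returns the LEFTMOST match: the engine tries each starting
# position from the beginning of the string, so the first timestamp wins.
m = re.search(TS, line)
print(m.group())  # 7/5/2015 10:10:18 AM
```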

Option 2: escaping the `|`:

filter {
    grok {
            match => ["message","%{DATESTAMP:time}\|%{WORD:MESSAGE1}\|%{WORD:EXCEPTION}\|%{WORD:MESSAGE2}"]
    }
}
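The escaped pipes in Option 2 act as positional anchors: each sub-pattern must match the text between specific delimiters, which also answers Q1. One caveat worth noting: grok's `%{WORD}` requires at least one word character, so it will not match the blank fields mentioned in the question; a pattern like `[^|]*` does. A Python sketch of the same idea, with simplified stand-in patterns (assumptions, not grok's definitions):

```python
import re

# Escaped "|" delimiters act as positional anchors, mirroring \| in Option 2.
# "[^|]*" (zero or more non-pipe chars) also matches blank fields,
# unlike grok's %{WORD}, which needs at least one word character.
pattern = r"^([^|]*)\|([^|]*)\|([^|]*)\|(.*)$"

line = "7/5/2015 10:10:18 AM|KDCVISH01||rest of the message"
m = re.match(pattern, line)
print(m.group(2))        # KDCVISH01
print(repr(m.group(3)))  # '' -> the blank third field is still captured
```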

It is working fine; you can check it at http://grokdebug.herokuapp.com/.
