
Following the advice in https://blog.codecentric.de/en/2014/10/log-management-spring-boot-applications-logstash-elastichsearch-kibana/ I have set up the logstash-logback-encoder plus logstash-forwarder to push everything to my Logstash daemon and finally index everything in Elasticsearch.

Here is my configuration:

logstash.xml

<included>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <property name="FILE_LOGSTASH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}/}spring.log}.json"/>
    <appender name="LOGSTASH"
              class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${FILE_LOGSTASH}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${FILE_LOGSTASH}.%i</fileNamePattern>
        </rollingPolicy>
        <triggeringPolicy
            class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerInfo>true</includeCallerInfo>                
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</included>

logstash-forwarder.conf

{
    "network": {
        "servers": [
            "logstash:5043"
        ],
        "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "ssl key": "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key",
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "timeout": 15
    },
    "files": [
        {
            "paths": [
                "${ENV_SERVICE_LOG}/*.log.json"
            ],
            "fields": {
                "type": "${ENV_SERVICE_NAME}"
            }
        }
    ]
}

logstash.conf

input {
    lumberjack {
        port => 5043

        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key"
    }
}

output {
    elasticsearch { host => "localhost" }
}
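
For reference, with this setup the LogstashEncoder writes one JSON object per log line into the *.log.json file that logstash-forwarder ships; a record looks roughly like this (the exact field set depends on the encoder version and on includeCallerInfo, and the values below are made up):

{"@timestamp":"2015-09-01T09:03:12.345+02:00","@version":1,"message":"Started Application","logger_name":"com.example.Application","thread_name":"main","level":"INFO","level_value":20000}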

Everything works fine; the logs are getting saved in Elasticsearch.

At this point I would like to specify additional fields to be indexed by Elasticsearch, for instance the log level. Searching the @message content for the presence of Error or Warn is not very useful.

How can I do this? Which configuration should I alter to make the level appear as an indexed field in Elasticsearch?

Jakub Narloch

2 Answers


What you're looking for is a logstash filter, which would be used on your indexer as a peer to the input and output stanzas.

There are a ton of filters (see the doc), but you would use grok{} to apply a regexp to your message field and extract the log level.

You didn't include a sample message, but, given a string like "foo 123 bar", this pattern would extract the "123" into an integer field called loglevel:

grok {
    match => ["message", "foo %{NUMBER:loglevel:int} bar"]
}

There's a decent amount of information on writing grok patterns on the web. Try this one.
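
Structurally, that filter is a peer of the existing input and output blocks in logstash.conf on the indexer. A sketch using the placeholder pattern above (swap it for one that matches your real messages):

input {
    lumberjack {
        port => 5043
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key"
    }
}

filter {
    # placeholder grok from above; the extracted field ("loglevel") becomes
    # an indexed field in Elasticsearch alongside @message
    grok {
        match => ["message", "foo %{NUMBER:loglevel:int} bar"]
    }
}

output {
    elasticsearch { host => "localhost" }
}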

Alain Collins
  • Thanks for pointing this out. It turns out there is a better solution: since logstash-logback-encoder already writes the logs in JSON format, you only need to add a json filter: filter { json { source => "message" } }. This makes the entire JSON log record indexed with all of its fields. – Jakub Narloch Sep 01 '15 at 09:03
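
For completeness, a minimal sketch of that comment's suggestion in the indexer configuration, assuming the forwarded JSON line arrives in the message field (as it does with the logstash-forwarder setup above):

filter {
    # parse the JSON produced by logstash-logback-encoder so that each key
    # (e.g. level, logger_name, thread_name) becomes its own indexed field
    json {
        source => "message"
    }
}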
logstash config file:

input {
  file {
    path => [ "/tmp/web.log" ]
  }
}

filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:severity} %{GREEDYDATA:message}" ]
  }
}

output {
  elasticsearch {
    host => "127.0.0.1"
    index => "web-%{+YYYY.MM.dd}"
  }
}

You can use 'add_tag' or 'add_field' to attach an extra tag or field to each event, as in the sketch below.
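
A minimal sketch of that, added to the grok filter above (the tag and field values here are purely illustrative):

filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:severity} %{GREEDYDATA:message}" ]
    # attach extra metadata to every event this filter successfully parses
    add_tag   => [ "web" ]
    add_field => { "service" => "my-web-app" }
  }
}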

bbotte