
I have a system with several Docker containers logging to syslog-ng. Syslog-ng is configured to write all the streams coming from the other containers into files. This part works well and I am getting logs like these:

2016-01-04T20:28:38+03:00 197.23.42.1 1 2016-01-04T14:28:38.197-03:00 adad20179cfb server-zuul - Audit - Mapped URL path [/micro-sacca-movimientos/**] onto handler of type [class org.springframework.cloud.netflix.zuul.web.ZuulController]
2016-01-04T20:30:29+03:00 197.23.42.1 1 2016-01-04T14:30:29.725-03:00 47dabf38eb34 server-zuul - Audit - Mapped URL path [/micro-sacca-movimientos/**] onto handler of type [class org.springframework.cloud.netflix.zuul.web.ZuulController]
2016-01-04T20:33:24+03:00 197.23.42.1 1 2016-01-04T14:33:24.447-03:00 47dabf38eb34 server-zuul - Audit - Flipping property: micro-sacca-movimientos.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2016-01-04T20:33:24+03:00 197.23.42.1 1 2016-01-04T14:33:24.455-03:00 47dabf38eb34 server-zuul - Audit - Client:micro-sacca instantiated a LoadBalancer:DynamicServerListLoadBalancer:{NFLoadBalancer:name=micro-sacca-movimientos,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null

Then I tried to set up this image:

https://hub.docker.com/r/willdurand/elk/

I mapped the logs path and set this config for Logstash:

input { 
    file {
        path => ["/var/log/syslog-ng/20160104/*.log"]
        start_position => "beginning"
    }
}
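
Note that an input block alone never ships anything to Elasticsearch; while debugging, a stdout output makes it easy to confirm the file input is actually reading (a sketch for troubleshooting, not part of the original setup):

    output {
        stdout {
            codec => rubydebug
        }
    }

If nothing appears on stdout, the problem is the input (paths, permissions, sincedb), not the index pattern.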

Then I started the image and opened the Kibana 4 interface. I tried index patterns like:

YYYY.MM.DD and YYYY-MM-DD, but I was never able to create the index to start using Kibana.

What am I doing wrong with the index pattern? Or have I misconfigured the Docker image somewhere?

Rys
    You skipped the middle part - is logstash receiving, processing, and inserting the event data into elasticsearch? (I'm guessing not). – Alain Collins Jan 10 '16 at 04:14

1 Answer


It works with this config:

input { 
    file {
        type => "syslog"
        path => ["/var/log/syslog-ng/**/*.log"]
        start_position => "beginning"
    }
}

filter {
    grok  {
        match => [ "message", "%{CISCOTIMESTAMP} %{IP:ip} 1 %{MCOLLECTIVEAUDIT}%{ISO8601_SECOND}%{ISO8601_TIMEZONE} %{WORD:contenedor} %{USERNAME:servicio} - Audit - %{UUID:idTx} %{WORD:codigoErr} %{GREEDYDATA:data}"]
    }   
}

output {
    elasticsearch {
        host => "127.0.0.1"
        cluster => "logstash"
    }
}
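
The grok above pulls out the container and service names from each line. As a sanity check outside Logstash, the same fields can be approximated with a plain regex against one of the sample lines (a sketch with my own field names; it mirrors the line layout, not the exact grok semantics):

```python
import re

# Rough stand-in for the grok pattern: syslog timestamp, source IP,
# literal "1", app timestamp, container id, service name, then the payload.
LINE = re.compile(
    r"^(?P<syslog_ts>\S+) (?P<ip>\S+) 1 (?P<app_ts>\S+) "
    r"(?P<contenedor>\S+) (?P<servicio>\S+) - Audit - (?P<data>.*)$"
)

sample = ("2016-01-04T20:28:38+03:00 197.23.42.1 1 "
          "2016-01-04T14:28:38.197-03:00 adad20179cfb server-zuul - Audit - "
          "Mapped URL path [/micro-sacca-movimientos/**] onto handler of type "
          "[class org.springframework.cloud.netflix.zuul.web.ZuulController]")

m = LINE.match(sample)
print(m.group("ip"), m.group("contenedor"), m.group("servicio"))
```

With the elasticsearch output above, Logstash writes events into daily logstash-YYYY.MM.dd indices by default, so the index pattern to create in Kibana is logstash-*.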
Rys