
Installed an ELK server via: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7

It seems to work except for the filebeat connection: filebeat does not appear to be forwarding anything, or at least I can't find anything in its logs indicating that anything is happening.

My filebeat configuration is as follows:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/secure
      encoding: utf-8
      input_type: log
  timeout: 30s
  idle_timeout: 30s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["my_elk_fqdn:5044"]
    bulk_max_size: 1024
    compression_level: 3
    worker: 1
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: debug
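
In case the indentation above got mangled when pasting, a parse error can be ruled out with filebeat's config test, and running it in the foreground makes publish/connection errors show up on the console. I'm assuming the filebeat 1.x command-line flags here, so adjust for your version:

# check that filebeat can parse the configuration file
filebeat -configtest -c /etc/filebeat/filebeat.yml

# run in the foreground, log to stderr, with debug output for the publisher
filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"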

The log file output I keep getting from filebeat is just not very helpful:

2016-07-14T17:32:21-04:00 DBG  Start next scan
2016-07-14T17:32:31-04:00 DBG  Start next scan
2016-07-14T17:32:41-04:00 DBG  Start next scan
2016-07-14T17:32:46-04:00 DBG  Flushing spooler because of timeout. Events flushed: 0
2016-07-14T17:32:51-04:00 DBG  Start next scan

Is there anything wrong with my configuration file?
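
In case it is a connectivity or TLS problem rather than the configuration itself, the beats port can also be checked from the filebeat host; this is a standard openssl check, with my_elk_fqdn and the CA path taken from the config above:

# verify that port 5044 is reachable and the certificate verifies
# against the CA that filebeat is configured to trust
openssl s_client -connect my_elk_fqdn:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt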

When I test on the ELK server to see if I am getting anything:

[root@my_elk_server ~]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
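
To check whether any indices exist at all (maybe they are just not named filebeat-*), the indices can be listed as well; _cat/indices is a standard Elasticsearch API:

# list every index with its document count and size
curl -XGET 'http://localhost:9200/_cat/indices?v'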

Oh, and my Logstash input configuration for beats:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
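
To rule out a syntax problem on the Logstash side, the whole conf.d directory can be checked with the --configtest flag; the /opt/logstash path is an assumption based on where the DigitalOcean tutorial installs Logstash 2.x:

# parse all pipeline files without actually starting Logstash
sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/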

UPDATE: It is not filebeat. I'm somewhat relieved that messages are indeed being passed, but I still have an issue I can't track down:

It turns out filebeat wasn't causing the problem. The Logstash configuration that sends events to Elasticsearch is not labeling the index (or the type) correctly, so the data isn't searchable the way I queried it in the question. Instead of putting filebeat in the index name, it produces a result like this:

"_index" : "%{[@metadata][beat]}-2016.07.14",

The elasticsearch output in the Logstash configuration file, where the index name is set, looks like this:

output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Apparently this @metadata is not being passed through correctly. Has anyone been able to get the _index and _type fields to populate correctly?
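
One way I'm trying to debug this is to temporarily add a stdout output that also prints @metadata, to see whether [@metadata][beat] is set on incoming events at all. The rubydebug codec's metadata option should be available in Logstash 2.x, but treat that as an assumption:

output {
  stdout {
    # print full events, including the normally hidden @metadata fields
    codec => rubydebug { metadata => true }
  }
}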

Could this be a bug with filebeat? https://github.com/logstash-plugins/logstash-input-beats/issues/6
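
As a fallback, and to confirm the suspicion, the index name could simply be hardcoded instead of relying on @metadata, e.g. in 30-elasticsearch-output.conf (the filename from the DigitalOcean tutorial). This is just a sketch of the workaround, not a fix for the missing metadata:

output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    # static index name as a workaround while [@metadata][beat] is empty
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}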

  • Have you tried a static representation of the index name? Just to confirm your suspicion. – Ed Baker Jul 15 '16 at 01:56
  • I was having the same issue, and I just hardcoded index => "logstash-%{+YYY.MM.dd}" in the 30-elasticsearch-output.conf file. What and where sets the @metadata value? – J21042 Sep 07 '16 at 19:14
