I set up an ELK stack to consume log files locally; now I am trying to add Filebeat, which will output to Logstash for filtering before the events are indexed into Elasticsearch. Here is my filebeat.yml configuration:

prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
  paths:
    - /var/samplelogs/wwwlogs/framework*.log
  input_type: log
  document_type: framework
logstash:
   # The Logstash hosts
   hosts: ["localhost:5044"]
logging:
   to_syslog: true

Here is the Logstash configuration:

input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "framework" {
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => {'message' => "\[%{WR_DATE:logtime}\] \[error\] \[app %{WORD:application}\] \[client %{IP:client}\] \[host %{HOSTNAME:host}\] \[uri %{URIPATH:resource}\] %{GREEDYDATA:error_message}"}
    }
    date {
      locale => "en"
      match => [ "logtime", "EEE MMM dd HH:mm:ss yyyy" ]
    }
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => "9200"
    protocol => "http"
    # manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
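
WR_DATE is not one of the stock grok patterns, so it has to be defined in the patterns_dir referenced above. That file isn't shown in the question; judging from the date filter's "EEE MMM dd HH:mm:ss yyyy" format, a plausible definition (an assumption, not the asker's actual file) would be:

# hypothetical contents of a file in /etc/logstash/conf.d/patterns
# matches timestamps like "Wed Mar 09 12:26:58 2016", built from core grok patterns
WR_DATE %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}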

This Logstash configuration checks out okay when I use --configtest. Filebeat starts up okay, but I am getting the following errors in logstash.log:

    {:timestamp=>"2016-03-09T12:26:58.976000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:26:58-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:03.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:03-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:08.060000-0700", :message=>"Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];", :level=>:error}
{:timestamp=>"2016-03-09T12:27:08.060000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"Java::OrgElasticsearchClusterBlock::ClusterBlockException", :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:08.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:08-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:13.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:13-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}

These errors keep repeating over and over.
In the Elasticsearch log there is an IllegalArgumentException: empty text error. I tried changing the protocol in the Logstash output configuration to "node".
It looks to me like Elasticsearch cannot be reached, but it is running:

$ curl localhost:9200
{
  "status" : 200,
  "name" : "Thena",
  "version" : {
    "number" : "1.1.2",
    "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
    "build_timestamp" : "2014-05-22T12:27:39Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}

This is my first time trying Logstash. Can anyone point me in the right direction?

trad

  • "SERVICE_UNAVAILABLE/1/state not recovered" means your cluster isn't happy. Check it, and then search around for more info on what you find: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html – Alain Collins Mar 09 '16 at 20:00
  • Not sure if the LS error and Filebeat are related. The filebeat config output section looks wrong. You specified logstash output on the top level, but it should be nested under output (see the nested example after these comments): https://github.com/elastic/beats/blob/1.2/filebeat/etc/filebeat.yml#L180 – ruflin Mar 10 '16 at 11:52
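
A quick way to act on Alain Collins's comment: the root endpoint returning 200 only shows the node is up, not that a master has been elected or the cluster state recovered. The cluster health API (a standard Elasticsearch endpoint) reports that directly:

$ curl localhost:9200/_cluster/health?pretty

A "status" of red or an unexpected node count here would line up with the "no master" and "state not recovered" blocks in the Logstash log.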
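
And for reference, the nesting ruflin describes: in the Filebeat 1.x schema from the linked file, prospectors sits under a top-level filebeat key and logstash under a top-level output key. A sketch using the same values as the question's config (not the asker's verified file):

filebeat:
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      paths:
        - /var/samplelogs/wwwlogs/framework*.log
      input_type: log
      document_type: framework
output:
  logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]
logging:
  to_syslog: true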

1 Answer

I was able to get my stack working. Everyone's comments were on point, but in this case it happened to be a configuration adjustment that I still don't fully understand.
In the Logstash output configuration, within the elasticsearch {} options, I commented out the port and protocol settings (set to 9200 and HTTP) and it worked. My first attempt at a fix was to remove the protocol option and thus use the node protocol by default. When that didn't work, I also removed the port option. Since the default for protocol is 'node', it appears that I simply couldn't get it working over HTTP, and at first I had forgotten to remove the port option. After removing both it worked.
This probably won't help people in the future, but if you are going to use the node protocol, make sure you don't forget to remove the port option from the configuration -- at least that's what I think I ran into here.
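
For concreteness, this is the question's output section with those two options commented out -- the form that worked for me:

output {
  elasticsearch {
    host => "localhost"
    # port => "9200"       # removed: leaving this set is what seemed to break the node protocol
    # protocol => "http"   # removed: the plugin then falls back to its default, 'node'
    # manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}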

trad