
I'm new to Logstash and am trying to get it running by following the tutorial in "The Logstash Book". On page 44, the guide suggests tailing the Logstash process's main log file, central.log. About two minutes after Logstash starts, the following message floods central.log:

{:timestamp=>"2014-01-06T02:21:04.098000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>100, :exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s], :backtrace=>  
    ["org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(TransportMasterNodeOperationAction.java:180)", "org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:483)",     
    "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)",
    "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)", 
    "java.lang.Thread.run(Thread.java:724)"], :level=>:warn}

Furthermore, if you execute a curl command against the Elasticsearch server, you receive the following:

[user@server ~]$ curl -XGET 'http://[elastic search host IP]:9200/_search?q=type:syslog&pretty=true'
{
  "error" : "SearchPhaseExecutionException[Failed to execute phase [initial], No indices / shards to search on, requested indices are []]",
  "status" : 503
}

Any ideas on what I possibly could have misconfigured here?

Josh
  • How did you start elasticsearch? Do you use the version included with logstash, or a standalone one? – deagh Jan 07 '14 at 05:57
  • Hey, thanks for the response. I used the standalone Elasticsearch. These are the commands used to install and subsequently start it: `wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.3.deb`, `sudo dpkg -i elasticsearch-0.90.3.deb`, `sudo /etc/init.d/elasticsearch start` – Josh Jan 07 '14 at 15:40
  • One more relevant fact: if I use my browser to access Elasticsearch, everything looks OK. `{ "ok" : true, "status" : 200, "name" : "smoker", "version" : { "number" : "0.90.3", "build_hash" : "5c38d6076448b899d758f29443329571e2522410", "build_timestamp" : "2013-08-06T13:18:31Z", "build_snapshot" : false, "lucene_version" : "4.4" }, "tagline" : "You Know, for Search" }` – Josh Jan 07 '14 at 15:47
  • If I use the "http" protocol when I send data to the elasticsearch output, it works; changing it back to "node" causes it to fail again. I assume this is because I am running ES 1.4.4 and Logstash is expecting 1.1. – TheFiddlerWins Mar 26 '15 at 13:24

1 Answer


I ran into this exact error last week when following the process outlined in The Logstash Book. My Logstash server log file was also flooded with "Failed to flush outgoing items". What I found was that I had not downloaded the correct version of Elasticsearch: the version of the standalone Elasticsearch must match the version of the Elasticsearch embedded in Logstash.

Because I was running Logstash 1.3.3, I was able to find the required Elasticsearch version number here (in my case, I had to use version 0.90.9):

http://logstash.net/docs/1.3.3/outputs/elasticsearch
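To confirm which version the standalone server is actually running, the root endpoint reports it, as shown in the comments above. A quick sketch for extracting the version field (the host and the sample response are assumptions for illustration):

```shell
# Query the standalone Elasticsearch server's root endpoint (adjust the host):
#   curl -XGET 'http://localhost:9200/'
# The "number" field under "version" is the standalone version to match.
# Extracting it from a saved response, for example:
response='{ "ok" : true, "status" : 200, "version" : { "number" : "0.90.3" } }'
echo "$response" | grep -o '"number" : "[^"]*"' | cut -d'"' -f4
# → 0.90.3
```

If the printed number differs from the one the Logstash documentation page lists for your release, that mismatch is the likely cause of the MasterNotDiscoveredException.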

Then I went to elasticsearch.org, clicked the download button, and scrolled down: beneath the download and installation sections there is a Past Releases link, just to the left of the Support for Elasticsearch button.

I am not sure how to determine the embedded Elasticsearch version for older releases of Logstash.

To summarize, I resolved this error by changing the version of the standalone Elasticsearch used in my Logstash pipeline to match the version of the embedded Elasticsearch.
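An alternative, noted in the comments above, is to sidestep the version coupling entirely: on Logstash 1.4 and later, the elasticsearch output can use the HTTP protocol instead of joining the cluster as a node, so the embedded client version no longer has to match the server. A minimal sketch of such an output block (the host value is an assumption for illustration):

```
output {
  elasticsearch {
    host     => "127.0.0.1"   # standalone Elasticsearch host (assumed)
    port     => 9200          # HTTP port, rather than the 9300 transport port
    protocol => "http"        # avoids the node-client version match requirement
  }
}
```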

keldwud