I'm new to Logstash and trying to get it running by following the tutorial in "The Logstash Book". On page 44, the guide suggests tailing the Logstash process's main log file, central.log. About two minutes after Logstash starts, the following message floods central.log:
{:timestamp=>"2014-01-06T02:21:04.098000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>100, :exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s], :backtrace=>
["org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(TransportMasterNodeOperationAction.java:180)", "org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:483)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)",
"java.lang.Thread.run(Thread.java:724)"], :level=>:warn}
Furthermore, if I run a curl query against the Elasticsearch server, I get the following:
[user@server ~]$ curl -XGET 'http://[elastic search host IP]:9200/_search?q=type:syslog&pretty=true'
{
"error" : "SearchPhaseExecutionException[Failed to execute phase [initial], No indices / shards to search on, requested indices are []]",
"status" : 503
}
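Since Elasticsearch itself clearly responds over HTTP, I assume I can also check the cluster state and existing indices directly with something like the commands below (same host placeholder); I'm just not sure what a healthy result should look like:

curl -XGET 'http://[elastic search host IP]:9200/_cluster/health?pretty=true'
curl -XGET 'http://[elastic search host IP]:9200/_aliases?pretty=true'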
Any ideas on what I could have misconfigured here?