
I followed this tutorial from Digital Ocean on how to install an ELK stack on a CentOS 7 machine.

Digital Ocean ELK Setup CentOS

It seemed pretty good, and got me as far as having an initial Elasticsearch node working correctly and Kibana 4 running behind Nginx. But after installing Logstash I ran into an issue: it doesn't seem to create any indexes in Elasticsearch! I'm sure it's a config issue somewhere, but where, I don't know.

I notice that when I list the Elasticsearch indexes using the _cat API after restarting Logstash, it hasn't created any:

curl http://localhost:9200/_cat/indices
yellow open .kibana  1 1 1 0 2.4kb 2.4kb
yellow open security 5 1 0 0  575b  575b

Here we have Kibana's index and what I think is a standard ES index called 'security', but it seems that Logstash is not communicating with ES at all, even though everything is running on the same machine.
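
(For reference, appending ?v to the _cat endpoints prints a header row, which makes those columns easier to read:)

curl 'http://localhost:9200/_cat/indices?v'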

These are the versions of ES and LS I have installed:

elasticsearch-1.5.2-1.noarch
logstash-1.5.1-1.noarch
logstash-forwarder-0.4.0-1.x86_64

The way the tutorial has it set up, three config files go into the Logstash conf.d directory.

In /etc/logstash/conf.d/01-lumberjack-input.conf I have:

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
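
(As a sanity check, the whole conf.d directory can be syntax-checked before restarting Logstash; this assumes the standard RPM install path for Logstash 1.5:)

# Should report whether each config file parses, without starting the pipeline
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/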

In /etc/logstash/conf.d/10-syslog.conf I have:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
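
To rule out the grok pattern itself, a sample line can be piped through a throwaway pipeline; a minimal sketch, assuming logstash is on the PATH (the sample host and program are made up):

# The rubydebug output should show the extracted syslog_* fields,
# or a _grokparsefailure tag if the pattern doesn't match
echo 'Jun 22 11:49:01 logs sshd[1234]: test message' | logstash -e '
  input { stdin { type => "syslog" } }
  filter {
    grok { match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" } }
  }
  output { stdout { codec => rubydebug } }'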

I also have a config of my own, carried over from my previous Logstash server, that listens on port 2541. It's in /etc/logstash/conf.d/20-logstash.conf:

input {
  lumberjack {
    # The port to listen on
    port => 2541

    # The paths to your ssl cert and key
    ssl_certificate => "/etc/pki/tls/certs/lumberjack.crt"
    ssl_key => "/etc/pki/tls/private/lumberjack.key"

    # Set this to whatever you want.
    type => "logstash"
    codec => "json"
  }
}


filter {
  if [type] == "postfix" {
    grok {
      match => [ "message", "%{SYSLOGBASE}", "timestamp", "MMM dd HH:mm:ss" ]
      add_tag => [ "postfix", "grokked" ]
    }
  }
}

filter {
  if [type] == "system" {
    grok {
      match => [ "message", "%{SYSLOGBASE}" ]
      add_tag => [ "system", "grokked" ]
    }
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => [ "message", "%{SYSLOGBASE}" ]
      add_tag => [ "syslog", "grokked" ]
    }
  }
}

filter {
  if [type] == "security" {
    grok {
      match => [ "message", "%{SYSLOGBASE}" ]
      add_tag => [ "security", "grokked" ]
    }
  }
}



output {
  stdout {
    #debug => true
    #debug_format => "json"
  }

  elasticsearch {
    host => "logs.mydomain.com"
  }
}
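
(As a side note, I realize the four nearly-identical filters above could probably be collapsed into one, keeping postfix separate because of its extra timestamp match; a sketch, assuming the `in` conditional and %{field} interpolation in add_tag behave as the Logstash docs describe:)

filter {
  if [type] in ["system", "syslog", "security"] {
    grok {
      match => [ "message", "%{SYSLOGBASE}" ]
      # %{type} interpolates each event's own type into the tag
      add_tag => [ "%{type}", "grokked" ]
    }
  }
}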

And in /etc/logstash/conf.d/30-lumberjack-output.conf I have output going to ES:

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
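
Writing this up, I notice that since Logstash concatenates every file in conf.d into a single pipeline, I effectively have two elasticsearch outputs, so each event should be sent to both logs.mydomain.com and localhost. To at least rule out connectivity, both can be poked on the HTTP port (assuming the default 9200):

# Each should return a small JSON document with the node name and ES version
curl http://localhost:9200
curl http://logs.mydomain.com:9200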

And now, after restarting Logstash again, I see that it is listening on the ports I specified in the configs:

[root@logs:/etc/logstash] #lsof -i :5000
COMMAND   PID     USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
java    23893 logstash   16u  IPv6 11665234      0t0  TCP *:commplex-main (LISTEN)
[root@logs:/etc/logstash] #lsof -i :2541
COMMAND   PID     USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
java    23893 logstash   18u  IPv6 11665237      0t0  TCP *:lonworks2 (LISTEN)
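
(The commplex-main and lonworks2 names are just what /etc/services maps ports 5000 and 2541 to; lsof's -P flag suppresses that mapping if the numeric ports are clearer:)

lsof -P -i :5000 -i :2541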

As of now Logstash is running but not producing any log output:

#ps -ef | grep logstash | grep -v grep
logstash 23893     1 16 11:49 ?        00:01:45 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/var/lib/logstash -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/var/lib/logstash -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

ls -lh /var/log/logstash/logstash.log
-rw-r--r--. 1 logstash logstash 0 Jun 22 11:49 /var/log/logstash/logstash.log
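
(Presumably more detail could be coaxed out by running Logstash in the foreground with verbose logging; something like this, again assuming the standard install path:)

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d --verbose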

But still no indexes have been created in Elasticsearch:

#curl http://localhost:9200/_cat/indices
yellow open .kibana  1 1 1 0 2.4kb 2.4kb
yellow open security 5 1 0 0  575b  575b

And when I go to configure Kibana, it says it can't find any indices matching the pattern "logstash-*".

Where can I go from here to get this working? The configs themselves are unchanged from what I've shown above.

I haven't tried pointing any logstash-forwarders at it yet, but I did try writing stdin to the Elasticsearch cluster with this command:

logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

And I got this error back:

Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
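
If it helps, that "no master" block can presumably be cross-checked against the cluster health endpoint (assuming the default HTTP port):

# status, number_of_nodes and unassigned_shards should show whether
# the cluster ever finished recovery and elected a master
curl 'http://localhost:9200/_cluster/health?pretty'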

Any thoughts on what could be happening?

user99201
  • Have you pointed any Logstash-Forwarders at Logstash? – GregL Jun 23 '15 at 11:45
  • No, not yet, but I tried writing stdin to the Elasticsearch cluster and got an error. Please see my last edit to my OP, as I had some log output that wouldn't fit into a comment. Thanks! – user99201 Jun 23 '15 at 19:39
  • It looks as though your cluster isn't green and you have some unrecovered shards. I'd suggest installing the Kopf or Marvel plugins to fix the cluster state and then try again. – GregL Jun 23 '15 at 22:54
