
I'm having issues authenticating Logstash with Shield. Logs aren't getting through to Elasticsearch, and the Elasticsearch log files show that all of the requests are being denied by Shield because of failed authentication.

The following is my Logstash configuration. It outputs to localhost:9200 over the default HTTP protocol, using the credentials of a user with admin rights that was created with the esusers useradd command.

input {
  file {
    path => "/data.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
      separator => ","
      columns => ["Date","Open","High","Low","Close","Volume","Adj Close"]
  }
  mutate {convert => ["High", "float"]}
  mutate {convert => ["Open", "float"]}
  mutate {convert => ["Low", "float"]}
  mutate {convert => ["Close", "float"]}
  mutate {convert => ["Volume", "float"]}
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user     => "test"
    password => "password"
  }
  stdout {
        codec => rubydebug
  }
}
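
For reference, this is roughly how the user was created with Shield's esusers tool. A minimal sketch, assuming a plugin install under the Elasticsearch home and the built-in admin role (paths may differ for package installs):

# create the user with the built-in admin role, then confirm it exists
bin/shield/esusers useradd test -p password -r admin
bin/shield/esusers list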

After restarting the Elasticsearch and Logstash services, I had a look at the logs:

logstash.stdout

Sending logstash logs to /var/log/logstash/logstash.log.

logstash.err and logstash.log are both empty.
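
Since the Logstash log files are empty, one way to see what the elasticsearch output is actually doing is to run Logstash in the foreground with verbose logging. A sketch (the config path is an assumption; use whichever file holds the pipeline above):

# validate the pipeline file, then run it in the foreground with verbose output
bin/logstash -f /etc/logstash/conf.d/logstash.conf --configtest
bin/logstash -f /etc/logstash/conf.d/logstash.conf --verbose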

elasticsearch.log

[2016-03-31 15:47:23,841][INFO ][node                     ] [Talisman] version[2.2.0], pid[2454], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-03-31 15:47:23,841][INFO ][node                     ] [Talisman] initializing ...
[2016-03-31 15:47:24,348][INFO ][plugins                  ] [Talisman] modules [lang-expression, lang-groovy], plugins [license, shield], sites []
[2016-03-31 15:47:24,379][INFO ][env                      ] [Talisman] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [34.6gb], net total_space [39.3gb], spins? [possibly], types [ext4]
[2016-03-31 15:47:24,379][INFO ][env                      ] [Talisman] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-03-31 15:47:24,417][WARN ][threadpool               ] [Talisman] requested thread pool size [100] for [index] is too large; setting to maximum [4] instead
[2016-03-31 15:47:24,631][INFO ][http                     ] [Talisman] Using [org.elasticsearch.http.netty.NettyHttpServerTransport] as http transport, overridden by [shield]
[2016-03-31 15:47:24,822][INFO ][transport                ] [Talisman] Using [org.elasticsearch.shield.transport.ShieldServerTransportService] as transport service, overridden by [shield]
[2016-03-31 15:47:24,823][INFO ][transport                ] [Talisman] Using [org.elasticsearch.shield.transport.netty.ShieldNettyTransport] as transport, overridden by [shield]
[2016-03-31 15:47:27,295][INFO ][node                     ] [Talisman] initialized
[2016-03-31 15:47:27,295][INFO ][node                     ] [Talisman] starting ...
[2016-03-31 15:47:28,949][INFO ][shield.transport         ] [Talisman] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-03-31 15:47:28,972][INFO ][discovery                ] [Talisman] elasticsearch/hUEIDcdWRTu9j3DZYMR8Fw
[2016-03-31 15:47:32,181][INFO ][cluster.service          ] [Talisman] new_master {Talisman}{hUEIDcdWRTu9j3DZYMR8Fw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-03-31 15:47:32,388][INFO ][http                     ] [Talisman] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-03-31 15:47:32,389][INFO ][node                     ] [Talisman] started
[2016-03-31 15:47:32,880][INFO ][license.plugin.core      ] [Talisman] license [removedThisJustIncase!] - valid
[2016-03-31 15:47:32,888][ERROR][license.plugin.core      ] [Talisman]
#
# License will expire on [Saturday, April 30, 2016]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
#
# Commercial plugins operate with reduced functionality on license expiration:
# - shield
#  - Cluster health, cluster stats and indices stats operations are blocked
#  - All data operations (read and write) continue to work
[2016-03-31 15:47:32,994][INFO ][gateway                  ] [Talisman] recovered [2] indices into cluster_state
[2016-03-31 15:47:34,746][INFO ][rest.suppressed          ] /_bulk Params: {}
ElasticsearchSecurityException[missing authentication token for REST request [/_bulk]]
        at org.elasticsearch.shield.support.Exceptions.authenticationError(Exceptions.java:39)
        at org.elasticsearch.shield.authc.DefaultAuthenticationFailureHandler.missingToken(DefaultAuthenticationFailureHandler.java:65)
        at org.elasticsearch.shield.authc.InternalAuthenticationService.authenticate(InternalAuthenticationService.java:102)
        at org.elasticsearch.shield.rest.ShieldRestFilter.process(ShieldRestFilter.java:71)
        at org.elasticsearch.rest.RestController$ControllerFilterChain.continueProcessing(RestController.java:265)
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:176)
        at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
        at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
        at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:363)
        at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:63)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:194)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:135)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:452)
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.handler.ipfilter.IpFilteringHandlerImpl.handleUpstream(IpFilteringHandlerImpl.java:154)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2016-03-31 15:47:35,381][INFO ][cluster.routing.allocation] [Talisman] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).

This ElasticsearchSecurityException is repeated for every record in the file I am trying to ingest. One thing I notice is that the exception does not mention my user or password at all.
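
To rule out the credentials themselves, they can be tested outside Logstash with a plain curl request (the user and password are the ones from the config above). If the authenticated request succeeds, the problem is presumably in how Logstash sends the Authorization header rather than in the Shield user itself:

# an unauthenticated request should be rejected with 401 by Shield,
# while the authenticated one should return the node / cluster health info
curl -i localhost:9200/
curl -i -u test:password localhost:9200/_cluster/health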

There have been a few other Stack Overflow questions like this, but their errors are usually of the form AuthenticationException[unable to authenticate user [user] for REST request ...], whereas in my case the authentication token appears to be missing entirely.

I also have nginx and kibana installed.

Help would be appreciated.

x3nr0s
    Unfortunately this question did not get an answer - and I am still not sure what caused these issues. Eventually I updated the versions of both elasticsearch and logstash and my issue fixed itself - whether this was a bug I am not sure. – x3nr0s Apr 05 '16 at 15:28
  • I second this finding, though have no idea about the cause. Upgraded from 2.2 to 2.3.1 - the process was: upgrade logstash, upgrade elasticsearch, remove the license and shield plugins, install them again (see the sketch after these comments) and reboot the server (rebooting was important in our case :) ) – st2rseeker Apr 06 '16 at 09:47
  • @st2rseeker I am new to such authentication mechanism. I use Logstash-5.6.5. Do I need to install any XPack for the Logstash side for such authentication? The Elasticsearch server I output to seems to be somehow provided with an HTTPS url and I am provided with a username/password to connect to it. – Loganathan Mar 05 '18 at 11:47
  • @Xenidious I am new to such authentication mechanism. I use Logstash-5.6.5. Do I need to install any XPack for the Logstash side for such authentication? The Elasticsearch server I output to seems to be somehow provided with an HTTPS url and I am provided with a username/password to connect to it. – Loganathan Mar 05 '18 at 11:48
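
Following up on st2rseeker's comment above, a sketch of the plugin reinstall step on Elasticsearch 2.x (assuming a package install; the plugin script typically lives under /usr/share/elasticsearch/bin):

# remove and reinstall the license and shield plugins, then restart the node
bin/plugin remove shield
bin/plugin remove license
bin/plugin install license
bin/plugin install shield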

0 Answers