
I have a Logstash instance running as a service that reads from Redis and outputs to Elasticsearch. I just noticed that nothing new had arrived in Elasticsearch for the last few days, while the Redis lists kept growing.

The Logstash log was filled with two errors, repeated over thousands of lines:

:message=>"Got error to send bulk of actions"
:message=>"Failed to flush outgoing items"

The reason being:

{"error":"IllegalArgumentException[Malformed action/metadata line [107], expected a simple value for field [_type] but found [START_ARRAY]]","status":500}, 
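For context, every document in an Elasticsearch bulk request is preceded by a one-line JSON action/metadata object, and the parser expects `_type` there to be a simple string. A minimal sketch (index and type names here are illustrative, not taken from my setup) of a valid metadata line next to the kind being rejected:

```python
import json

# A correct bulk action/metadata line: _type is a simple string value.
good = {"index": {"_index": "logstash-2015.09.02", "_type": "b2c-web"}}

# The malformed variant Elasticsearch rejects: _type has become an
# array, so the parser hits START_ARRAY where it expects a simple value.
bad = {"index": {"_index": "logstash-2015.09.02", "_type": ["b2c-web", "b2c-web"]}}

print(json.dumps(good))
print(json.dumps(bad))
```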

Additionally, trying to stop the service failed repeatedly and I had to kill it. Restarting it drained the Redis lists and imported everything into Elasticsearch. It seems to work fine now.

But I have no idea how to prevent this from happening again. The type field mentioned in the error is set to a string in each input directive, so I don't understand how it could have become an array.
What am I missing?

I'm using Elasticsearch 1.7.1 and Logstash 1.5.3. The logstash.conf file looks like this:

input {
  redis {
    host => "127.0.0.1"
    port => 6381
    data_type => "list"
    key => "b2c-web"
    type => "b2c-web"
    codec => "json"
  }
  redis {
    host => "127.0.0.1"
    port => 6381
    data_type => "list"
    key => "b2c-web-staging"
    type => "b2c-web-staging"
    codec => "json"
  }

  # other redis inputs, only key/type variations
}
filter {
  grok {
    match => ["msg", "Cache hit %{WORD:query} in %{NUMBER:hit_total:int}ms. Network: %{NUMBER:hit_network:int} ms.     Deserialization %{NUMBER:hit_deserial:int}"]
    add_tag => ["cache_hit"]
    tag_on_failure => []
  }
  # other groks, not related to the type field
}
output {
  elasticsearch {
    host => "[IP]"
    port => "9200"
    protocol => "http"
    cluster => "logstash-prod-2"
  }
}
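For reference, the grok pattern in the filter block above corresponds roughly to the following regular expression; the sample log line is invented for illustration, not taken from my logs:

```python
import re

# Rough regular-expression equivalent of the grok match above.
# WORD -> \w+ and NUMBER -> \d+ (the :int suffix only affects the
# field type in the event, not the matching itself).
pattern = re.compile(
    r"Cache hit (?P<query>\w+) in (?P<hit_total>\d+)ms\. "
    r"Network: (?P<hit_network>\d+) ms\.\s+Deserialization (?P<hit_deserial>\d+)"
)

sample = "Cache hit GetUser in 12ms. Network: 3 ms.     Deserialization 2"
m = pattern.search(sample)
print(m.groupdict())
# {'query': 'GetUser', 'hit_total': '12', 'hit_network': '3', 'hit_deserial': '2'}
```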
Kobayashi
Antoine
  • Could you provide your config or at least an excerpt where you set the type field? It seems to be related to the elasticsearch bulk api: https://github.com/elastic/elasticsearch/issues/11458 – hurb Sep 02 '15 at 15:27
  • Indeed that issue looks similar, but with array issue instead of null. – Antoine Sep 02 '15 at 16:27
  • Can you clear the logstash.log file, update your Elasticsearch output plugin, and restart Logstash? I had a similar issue: in my case Elasticsearch went down first, and after restarting I had plugin and connection issues. – Aditya Patel Sep 08 '15 at 05:20
  • Was this ever solved? dealing with the same thing right now. – Nate Oct 07 '15 at 15:31
  • @Nate this sounds like something wrong was sent via the bulk request. What's the error in your case? The same? About `_type`? Are you using Logstash? – Andrei Stefan Oct 07 '15 at 15:44
  • Same error about `_type`, and yes I'm using Logstash. – Nate Oct 07 '15 at 17:17
  • Can you identify the document that was attempted to be indexed and failed with that error? Or the bulk request? – Andrei Stefan Oct 07 '15 at 21:07
  • @Antoine can you run your logstash process with `--debug` so we can see what's in the bulk payload, since the error is about `_type` being wrong (i.e. this is the `_type` field in the bulk command line)? – Val Oct 08 '15 at 03:26
  • @Antoine, you might also want to add `action.bulk: TRACE` to your Elasticsearch `config/logging.yml` file so you can see what it looks like from the ES side. – Val Oct 08 '15 at 03:37
  • @Antoine, did you try what I suggested above so we get more insights into what's going on? – Val Oct 10 '15 at 03:04
  • @Val, I'm sorry but I haven't tried your suggestions. Unfortunately I don't have the time to investigate this right now. And the issue has not re-occurred yet. – Antoine Oct 12 '15 at 06:58
  • @Nate what about you? did you try since you opened the bounty? – Val Oct 12 '15 at 09:29
  • @Val the error occurred when I would shut down our logstash indexer before our logstash shippers, and only occasionally. To mitigate this, I created a script which safely shuts down all shippers before the indexer, and awakens the indexer before all shippers. – Nate Oct 13 '15 at 13:27

1 Answer


According to your log message:

{"error":"IllegalArgumentException[Malformed action/metadata line [107], expected a simple value for field [_type] but found [START_ARRAY]]","status":500},

It seems you're trying to index a document with a type field that's an array instead of a string.

I can't help further without more of your logstash.conf file, but check the following:

  1. If you use add_field to set a field that already exists (such as type), Logstash appends the new value, turning the field into an array with multiple values, which is exactly what Elasticsearch is complaining about.

  2. You can use mutate's join option to convert such an array back into a single string (see the mutate filter documentation):

    filter {
        mutate {
            join => { "type" => "," }
        }
    }
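To make both points concrete, here is a small Python sketch (not Logstash code; the append semantics mirror what add_field does to an already-existing field, as I understand its documented behavior, and the join line mirrors the mutate fix):

```python
# Hypothetical sketch: what happens to an event's "type" field when
# add_field targets a field the redis input has already populated.
event = {"type": "b2c-web"}          # set by the input's type directive

def add_field(event, field, value):
    if field in event:               # field exists: the value is appended,
        existing = event[field]      # so the field becomes an array
        event[field] = (existing if isinstance(existing, list) else [existing]) + [value]
    else:
        event[field] = value

add_field(event, "type", "b2c-web")
print(event)  # {'type': ['b2c-web', 'b2c-web']}

# The mutate/join fix collapses the array back into a simple string:
event["type"] = ",".join(event["type"])
print(event)  # {'type': 'b2c-web,b2c-web'}
```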
    
Dulguun
  • I had forgotten to update this question when I found out, but indeed the mutate filter fixed the issue. – Antoine Dec 16 '16 at 09:55