
The plugin works fine with low-volume topics, but for high-volume topics it has trouble and gives this error:

  unexpected error error="The value for message_count is too large. You passed 1001 in the request, but the maximum value is 1000.

I changed the config and set the following parameters: buffer_chunk_limit 2k and flush_interval 1.
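
For reference, this is roughly how the match section looks with those settings in fluentd 0.12-style config. The match pattern, plugin type, project, topic, and key path below are placeholders, not my actual values:

  <match my.logs.**>
    # output plugin type - adjust to whatever plugin you actually use
    type gcloud_pubsub
    project my-gcp-project         # placeholder GCP project
    topic my-topic                 # placeholder Pub/Sub topic
    key /path/to/credentials.json  # placeholder service account key

    # the buffer settings in question
    buffer_chunk_limit 2k
    flush_interval 1
  </match>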

Even though the buffer chunk limit is 2k, the buffered files on disk are still much larger, as if the chunk limit has no effect.

I now get the following errors for high-volume topics:

  2017-02-04 21:38:08 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-02-04 21:38:03 +0000 error_class="Gcloud::Pubsub::ApiError" error="The value for message_count is too large. You passed 6915 in the request, but the maximum value is 1000." plugin_id="object:3fbb05554ad8"
  2017-02-04 21:38:08 +0000 [warn]: suppressed same stacktrace
  2017-02-04 21:38:09 +0000 [warn]: Size of the emitted data exceeds buffer_chunk_limit.
  2017-02-04 21:38:09 +0000 [warn]: This may occur problems in the output plugins ``at this server.``
  2017-02-04 21:38:09 +0000 [warn]: To avoid problems, set a smaller number to the buffer_chunk_limit
  2017-02-04 21:38:09 +0000 [warn]: in the forward output ``at the log forwarding server.``

In the forward output documentation I don't see any reference to a chunk limit parameter, and I am not sure how I can enforce smaller chunks when publishing to Pub/Sub.

I also looked for a way of increasing that limit on the Pub/Sub side, but I can't find anything in the Pub/Sub docs about the 1000 limit that the fluentd error shows when publishing the logs.

Any help is appreciated.

FZF

1 Answer


If you are using fluent-plugin-gcloud-pubsub, this problem can be solved by switching to this plugin:

https://github.com/mia-0032/fluent-plugin-gcloud-pubsub-custom

This plugin is continuously maintained, and grpc/grpc#7804 has also been fixed in it, so I recommend it.
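
A minimal sketch of a publish configuration with the custom plugin, assuming fluentd 0.12-style flat parameters and the max_messages / max_total_size options described in the plugin's README (check the README of the version you install for the exact option names). The match pattern, project, topic, and key path are placeholders:

  <match my.logs.**>
    # assuming the custom plugin keeps the same output type name
    type gcloud_pubsub
    project my-gcp-project         # placeholder GCP project
    topic my-topic                 # placeholder Pub/Sub topic
    key /path/to/credentials.json  # placeholder service account key

    # the custom plugin splits each publish into requests of at most
    # max_messages messages / max_total_size bytes, so a large buffer
    # chunk no longer hits the 1000-message API limit
    max_messages 1000
    max_total_size 9800000

    flush_interval 1
  </match>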

Daichi Hirata