
What do you think folks, about placing Redis in each Logstash shipper node to ensure a guaranteed logs delivery?

Ben
  • Can you describe your question more clearly? So far, logstash recommends that redis be placed at the indexer node. – Ban-Chuan Lim Aug 29 '14 at 02:34
  • Logstash recommends redis at the indexer node for load balancing. – Ben Aug 30 '14 at 12:48
  • My concern is reliable shipping; I don't want to use classic RELP/rsyslog. Any suggestions to ensure no log loss? – Ben Aug 30 '14 at 12:55

2 Answers


If you are using logstash or logstash-forwarder as your shipper into a centralized logstash, you typically don't need a broker like redis. The shippers detect when logstash is unable to accept more events; they maintain a pointer to the current position in each log file and resume from it once the bottleneck clears.
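For reference, a minimal logstash-forwarder configuration looks roughly like this (the hostnames, ports, and paths below are placeholders, not part of the question):

```
{
  "network": {
    "servers": [ "indexer.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/app/*.log" ],
      "fields": { "type": "app" }
    }
  ]
}
```

Because the forwarder remembers its offset in each file, an outage of the central logstash just pauses reading; shipping continues from the same offset when the connection is restored, so nothing already on disk is lost.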

If you have sources that don't buffer on their own (syslog, snmptrap, etc.), then a broker makes sense, since events that arrive while logstash is down would otherwise be dropped.
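As a sketch of that broker setup (hosts, ports, and the redis key here are assumptions for illustration): the shipper pushes events onto a redis list, and the indexer pops them off:

```
# shipper pipeline: accept syslog and push events into redis
input {
  syslog { port => 5514 }
}
output {
  redis { host => "redis.example.com" data_type => "list" key => "logstash" }
}

# indexer pipeline: pull events from redis and index them
input {
  redis { host => "redis.example.com" data_type => "list" key => "logstash" }
}
output {
  elasticsearch { host => "es.example.com" }
}
```

Redis then absorbs bursts and holds events (up to its memory limit) whenever the indexer falls behind or is restarted.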

Alain Collins

There are no guarantees from the point of ingestion (the event being generated downstream) to the point of storage in Elasticsearch or Hadoop. This is a distributed system with many points of failure, so unfortunately it is up to you to handle reliability and recovery. There is a change request pending with logstash to improve the current situation:

https://github.com/elastic/logstash/issues/2609

Nonetheless, logstash acts as the originator or aggregator of events for the downstream system, and you need to solve reliability based on your own system design and SLA. If you were using a full-stack product like Splunk, the issue would be handled by the vendor (at least that's what I was told, but I have yet to test it myself ;)).

YaRiK