
I have ELK set up as below:
Kibana <-- ElasticSearch <-- Logstash <-- FileBeat (fetching logs from different log sources)
This setup breaks down when the message inflow is high. From what I have read online, people recommend adding Redis to this setup to give ES breathing space to consume messages. So I now wish to set up something like this:
Kibana <-- ElasticSearch <-- Logstash <-- REDIS <-- FileBeat (fetching logs from different log sources)
I want Redis to act as an intermediate buffer holding messages so that the consumer end does not become a bottleneck. But the Redis dump.rdb keeps growing, and once messages are consumed by Logstash it does not shrink back (the space is not freed). Below is my redis.conf:

bind host
port port
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no
supervised no
pidfile /var/run/redis.pid
loglevel notice
logfile "/tmp/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
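One thing worth noting about the config above: dump.rdb is a full point-in-time snapshot that is rewritten on each save, so it only shrinks after the next successful BGSAVE once the list itself has drained. Since Redis is acting purely as a transient buffer here, a common tweak (not in the original post, and only acceptable if losing the buffered events on a crash is tolerable) is to disable RDB snapshotting entirely so dump.rdb is never written, replacing the three `save` lines above:

```
# Redis is only a transient queue here; disable point-in-time RDB
# snapshots entirely so dump.rdb is never written. Caveat: the buffered
# events are lost if Redis crashes or restarts.
save ""
# appendonly is already "no" above, so no AOF file is written either.
```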

EDIT: FileBeat config:

filebeat:
  prospectors:
    -
      paths:
        - logPath
      input_type: log
      tail_files: true
output:
   redis:
     host: "host"
     port: port
     save_topology: true
     index: "filebeat"
     db: 0
     db_topology: 1
     timeout: 5
     reconnect_interval: 1
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    rotateeverybytes: 10485760
  level: warning

Logstash Config:

input {
  redis {
    host => "host"
    port => "port"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
  }
}
output {
  elasticsearch {
    hosts => ["hosts"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
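If Logstash itself is the bottleneck, the redis input plugin can also pull events in batches and with multiple client threads. A hedged variant of the input above (option names per the logstash-input-redis plugin; defaults and availability vary by plugin version, and the values here are illustrative, not tuned):

```
input {
  redis {
    host => "host"
    port => "port"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
    batch_count => 125   # pop up to 125 events per Redis request
    threads => 4         # assumption: tune to your CPU and event rate
  }
}
```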

Let me know if more info is needed. TIA!!!

Mrunal Gosar

1 Answer


I think your problem might be with the way the messages are stored in and retrieved from Redis.

Ideally you should use the Redis List data structure: use LPUSH to insert messages and LPOP to retrieve them.
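A minimal pure-Python sketch of those list semantics, with a deque standing in for the Redis list (no server required). Note the direction matters: LPUSH paired with LPOP drains newest-first (LIFO), while LPUSH with RPOP is oldest-first (FIFO), which is what a log pipeline normally wants; either way a pop removes the entry, so the backlog shrinks as long as the consumer keeps up:

```python
from collections import deque

# Toy model of a Redis list: LPUSH adds at the head (left),
# LPOP pops from the head, RPOP pops from the tail.
queue = deque()

# Producer (Filebeat) pushes three log lines, LPUSH-style.
for msg in ["log1", "log2", "log3"]:
    queue.appendleft(msg)  # LPUSH filebeat <msg>

# LPUSH + LPOP consumes newest-first (LIFO)...
lifo_first = queue.popleft()  # LPOP  -> "log3"

# ...while LPUSH + RPOP consumes oldest-first (FIFO).
fifo_first = queue.pop()      # RPOP  -> "log1"

# Popped entries are gone: only "log2" remains in the backlog.
print(lifo_first, fifo_first, len(queue))  # log3 log1 1
```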

Bhushan
  • I've added my Filebeat and Logstash config, please see. I am using the List data structure, but I'm not sure where I should specify LPUSH and LPOP. I am new to Redis. – Mrunal Gosar Jun 17 '16 at 07:07
  • You have used data_type => "list" in the Logstash config, so it should remove the entry from the list after consuming it. Do check your configuration again. – Bhushan Jun 17 '16 at 07:47
  • So whatever I've used is correct, right? But I still don't see the dump.rdb size shrinking; it keeps growing. – Mrunal Gosar Jun 17 '16 at 08:16
  • Okay, let's debug it then. Which OS is Redis installed on? Check the size of the Redis list "filebeat" with the command "LLEN filebeat". – Bhushan Jun 17 '16 at 08:52
  • OK, I think I got it. Not sure if that's the issue, but I'll give it some time to run and see. It's installed on a *nix system, and I did try the LLEN filebeat command in redis-cli; the list length increases and decreases, but the size of dump.rdb was not getting reduced. – Mrunal Gosar Jun 17 '16 at 09:08
  • OK, I am still missing something here as a whole: it seems that Redis piles up messages and only then starts sending them to Logstash, so in Kibana I am not getting live messages. Any suggestions in this regard? – Mrunal Gosar Jun 23 '16 at 09:12
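Regarding that last comment: a lag like this usually just means Logstash drains the list more slowly than Filebeat fills it, so events queue up in Redis before reaching Kibana. A toy model of that backlog dynamic (the rates are made-up numbers, not measurements from this setup):

```python
# Toy backlog model: if the producer is faster than the consumer,
# the Redis list (and hence dump.rdb) can only grow over time.
def backlog_after(seconds, in_rate, out_rate, start=0):
    """Queue length after `seconds`, given constant events/sec rates."""
    backlog = start
    for _ in range(seconds):
        backlog = max(0, backlog + in_rate - out_rate)
    return backlog

# Producer outpaces consumer: backlog grows without bound.
print(backlog_after(60, in_rate=1200, out_rate=1000))            # 12000

# Consumer outpaces producer: an existing backlog drains to zero.
print(backlog_after(60, in_rate=1000, out_rate=1200, start=5000))  # 0
```

If `LLEN filebeat` trends upward over minutes, the fix is on the consumer side (faster Logstash/ES, batching, more workers), not in Redis itself.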