I have an ELK setup as below:
Kibana <-- ElasticSearch <-- Logstash <-- FileBeat (fetching logs from different log sources)
This setup breaks down when the message inflow gets high. From what I have read online, the common recommendation is to add Redis to the pipeline as a buffer, giving Elasticsearch breathing space to consume messages at its own pace. So I now want to set up something like this:
Kibana <-- ElasticSearch <-- Logstash <-- REDIS <-- FileBeat (fetching logs from different log sources)
I want Redis to act as an intermediate broker that holds messages so the consumer end does not become a bottleneck. The problem: the Redis dump.rdb keeps growing, and once messages are consumed by Logstash it does not shrink back (the space is not freed).
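For reference, this is roughly how I am checking the backlog (the key name comes from my Filebeat config further down; host and port are placeholders):

# Length of the pending list Filebeat writes to; it drops as Logstash
# consumes, yet dump.rdb on disk never shrinks
redis-cli -h host -p port LLEN filebeat

# Memory actually held by Redis, to compare against the file size
redis-cli -h host -p port INFO memory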
Below is my redis.conf:
bind host
port port
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no
supervised no
pidfile /var/run/redis.pid
loglevel notice
logfile "/tmp/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
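For what it's worth, my understanding is that the three save lines above are what trigger the RDB snapshots that produce dump.rdb. Since I only need Redis as a transient buffer, one option I am considering (a sketch, in case persistence is the culprit) is disabling snapshotting entirely:

# Override the save points above: an empty string disables RDB snapshots,
# so no dump.rdb is written at all
save ""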
EDIT: FileBeat config:
filebeat:
  prospectors:
    -
      paths:
        - logPath
      input_type: log
      tail_files: true
output:
  redis:
    host: "host"
    port: port
    save_topology: true
    index: "filebeat"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    rotateeverybytes: 10485760
  level: warning
Logstash Config:
input {
  redis {
    host => "host"
    port => "port"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
  }
}

output {
  elasticsearch {
    hosts => ["hosts"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
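As a sanity check on the pipeline itself (a sketch; the key name matches the Filebeat index above), I can push a test entry straight onto the list and watch whether Logstash drains it:

# Push a fake event onto the list Logstash reads from
redis-cli -h host -p port RPUSH filebeat '{"message": "test event"}'

# The list length should drop back to 0 once Logstash picks it up
redis-cli -h host -p port LLEN filebeat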
Let me know if more info is needed. Thanks in advance!