
I plan to use Filebeat to ship log files from a predetermined directory on each node to a common Elasticsearch database by means of Logstash. For instance, the source directory contains 10 log files that share the same log format and are growing rapidly every day.

Although those 10 log files share the same log format, they are generated by 10 different programs. In the long run it might therefore be wise to build one index per log file, i.e. 10 indices, each corresponding to one log file. First, is this a suitable plan? Or is a single index for all the log data enough, given that indices are created on a daily basis?

If it is wise to build an index per log file, how can this be achieved? It seems that filebeat.yml allows only one index to be defined, so generating several indices from a single Filebeat appears impossible. Is there a good solution for this?

Rui
  • Depends on how large the daily index is..., how many nodes you have in the cluster, how many shards the cluster has and what's the plan for growth. – Andrei Stefan May 01 '16 at 21:11
  • Assume the daily index is super large as I mentioned. I mainly want to know how to create indices for each log file – Rui May 01 '16 at 21:18

1 Answer


Filebeat is only the shipper. By itself it cannot send to multiple Elasticsearch indices. For this you need Logstash.

In Filebeat, define a separate prospector for each log file and set a custom field to a different value in each one, so that Logstash can use that field to apply the routing logic.
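For example, a minimal filebeat.yml sketch along those lines (a Filebeat 1.x-style layout is assumed; the paths, the log_source field name, and its values app1/app2 are placeholders):

    filebeat:
      prospectors:
        -
          input_type: log
          paths:
            - /var/log/app1/app1.log
          # custom field that Logstash will later use for routing
          fields:
            log_source: app1
        -
          input_type: log
          paths:
            - /var/log/app2/app2.log
          fields:
            log_source: app2
    output:
      logstash:
        hosts: ["localhost:5044"]

By default Filebeat nests custom fields under a fields key, so in Logstash the value is available as [fields][log_source].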

Then in Logstash, based on that field, which carries a different value for each prospector, send the event to one index or another.
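A rough Logstash pipeline sketch for that routing (the port, hosts, and index name patterns are assumptions):

    input {
      beats {
        port => 5044
      }
    }

    output {
      # route on the custom field set per prospector in Filebeat
      if [fields][log_source] == "app1" {
        elasticsearch {
          hosts => ["localhost:9200"]
          index => "app1-%{+YYYY.MM.dd}"
        }
      } else if [fields][log_source] == "app2" {
        elasticsearch {
          hosts => ["localhost:9200"]
          index => "app2-%{+YYYY.MM.dd}"
        }
      }
    }

With 10 log files you do not need 10 conditionals; the field value can also be interpolated directly into the index name, e.g. index => "%{[fields][log_source]}-%{+YYYY.MM.dd}".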

Andrei Stefan