I plan to use Filebeat to ship log files from a predetermined directory on each node to a common Elasticsearch cluster via Logstash. For instance, the source directory contains 10 log files that share the same log format and grow rapidly every day.
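For reference, my current setup is roughly the following minimal sketch (the paths and hosts are placeholders, and I'm assuming the Filebeat 7.x `log` input):

```yaml
# filebeat.yml (sketch) - ship everything from one directory to Logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapps/*.log        # the predetermined directory on each node

output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder Logstash host
```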
Although those 10 log files share the same format, they are generated by 10 different programs. Therefore, in the long run it might be wise to build one index per log file, i.e. 10 indices, each corresponding to one log file. First, is this a suitable plan? Or is a single index for all the log data enough, given that a new index is created on a daily basis anyway?
If it is wise to build one index per log file, how can this be achieved? It seems that filebeat.yml allows only one index to be defined, which would make it impossible for a single Filebeat instance to feed several indices. Is there a good solution for this? The best workaround I could come up with is sketched below.
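The only approach I can think of (a rough sketch, not necessarily the recommended way; the field name `app` and the paths are just examples) is to define one input per log file in Filebeat and attach a custom field identifying the originating program:

```yaml
# filebeat.yml (sketch) - one input per log file, each tagged with a custom field
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapps/program1.log   # placeholder path for program 1
    fields:
      app: program1                    # hypothetical field used later for index routing
  - type: log
    paths:
      - /var/log/myapps/program2.log   # placeholder path for program 2
    fields:
      app: program2
  # ... repeat for the remaining log files

output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder Logstash host
```

On the Logstash side, I assume the elasticsearch output could then build the index name from that field, e.g. `index => "%{[fields][app]}-%{+YYYY.MM.dd}"`, so a single Filebeat and a single pipeline would still write to one index per log file. Is this how it is usually done, or is there a cleaner option?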