I use Locust (https://locust.io/) on several machines. Each --master and --slave node started with the --logfile option writes its log to its own directory. Is it possible to make them all write to a single, common log file? Collecting and analyzing the logs from every machine separately is very inconvenient.
- I am not familiar with Locust, but I do use `loguru` (https://loguru.readthedocs.io/en/stable/) on clusters to write logs from different workers to the same file, using its `enqueue` parameter. – dzang Feb 26 '20 at 12:59
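A minimal sketch of what that comment describes, assuming the machines share a filesystem (the path below is a placeholder): `enqueue=True` routes records through a multiprocessing-safe queue, so concurrent writers don't interleave half-written lines.

```python
from loguru import logger

# enqueue=True makes writes to this sink multiprocess-safe by pushing
# records through a queue instead of writing directly. Sharing one file
# across *machines* additionally assumes a shared filesystem (e.g. NFS);
# loguru itself does not provide network transport.
logger.add("/shared/logs/locust_workers.log", enqueue=True)

logger.info("worker started")
```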
1 Answer
I don't know exactly what it is that you're analyzing from the logs (so this might or might not be the answer you're looking for), but you can use Locust's `--csv` command line option (and possibly also `--csv-full-history`) to have the master node continuously write the aggregated request statistics and failures in CSV format to files.
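For illustration, an invocation along those lines using the pre-1.0 flag names from the question (`--master`/`--slave`); the CSV prefix `loadtest` and the master host address are placeholders, and exact flag spellings vary by Locust version:

```sh
# On the master machine: aggregate stats from all workers and write
# them to loadtest_*.csv files.
locust --master --csv=loadtest --csv-full-history -f locustfile.py

# On each worker machine: connect to the master; request stats are
# reported back and end up in the master's CSV files.
locust --slave --master-host=192.168.1.10 -f locustfile.py
```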

heyman
- I log all unsuccessful responses from the server, and it would be nice to see in one file which error the server returned, for which request, and on which machine. – Andrey Mostopalov Feb 26 '20 at 13:30
- Then I'd recommend that you use some third-party logging server or database to report and store that centrally. Or, if the only issue with your current solution is that you have to fetch all the log files manually, write a script that downloads and merges all the log files. – heyman Feb 26 '20 at 14:02
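If you go the script route, a hypothetical merge step could look like the sketch below. It assumes the per-machine log files have already been downloaded into `./logs/` and that every log line begins with a sortable timestamp (multi-line entries such as tracebacks would need extra handling):

```python
import glob
import heapq

def merge_logs(pattern="logs/*.log", out_path="merged.log"):
    """Interleave already-chronological log files into one file."""
    files = [open(path) for path in glob.glob(pattern)]
    try:
        with open(out_path, "w") as out:
            # heapq.merge streams the inputs and compares lines
            # lexicographically; with a leading timestamp on each line,
            # lexicographic order is chronological order.
            for line in heapq.merge(*files):
                out.write(line)
    finally:
        for f in files:
            f.close()

if __name__ == "__main__":
    merge_logs()
```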