I have a simple Python server script that forks off multiple instances (say N) of a C++ program. The C++ program generates events that need to be captured.
The events are currently captured in log files (one log file per forked process). In addition, I need to periodically (every T minutes) get the rate at which events are being produced across all child processes, reported either to the Python server or to some other program listening for these events (I'm still not sure which). Based on the event rate, the server may take some action (say, reduce the number of forked instances).
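Whichever transport is chosen, the server-side decision step is the same: compute a trailing event rate and compare it to a threshold. A minimal sketch of that piece (the window length, threshold, and scale-down step are hypothetical placeholders):

```python
import time


def event_rate(timestamps, window_secs, now=None):
    """Events per minute over the trailing window of `window_secs` seconds.

    `timestamps` is an iterable of Unix timestamps, one per event.
    """
    now = time.time() if now is None else now
    recent = [t for t in timestamps if now - t <= window_secs]
    return len(recent) / (window_secs / 60.0)


def desired_instances(rate_per_min, current_n, max_rate_per_min):
    """Reduce the instance count by one while the rate exceeds the cap.

    The one-at-a-time policy is just an example; any scaling rule fits here.
    """
    if rate_per_min > max_rate_per_min and current_n > 1:
        return current_n - 1
    return current_n
```

The server would call `event_rate` every T minutes on whatever event timestamps it has collected, then resize the process pool toward `desired_instances`.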
Some approaches I have briefly looked at:
- grep log files - go through the log files of the running processes (.running), filter the entries generated in the last T minutes, analyse the data, and report
- socket IPC - add code to the C++ program to send the events to some server program, which analyses the data every T minutes, reports, and starts over
- redis/memcached (not fully sure yet) - add code to the C++ program to push the generated events into a shared store; a reporter analyses the data every T minutes, reports, and starts over
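The first option needs no changes to the C++ program, only a periodic scan from the Python side. A rough sketch of the per-file counting step, assuming each log line begins with an ISO-8601 timestamp (the log format here is an assumption; adapt the parsing to whatever the C++ program actually writes):

```python
from datetime import datetime, timedelta


def count_recent_events(lines, window_minutes, now):
    """Count log lines whose leading ISO-8601 timestamp falls in the last
    `window_minutes` minutes before `now`.

    Expected line shape (assumed): "2024-05-01T12:00:00 EVENT ..."
    """
    cutoff = now - timedelta(minutes=window_minutes)
    count = 0
    for line in lines:
        try:
            ts = datetime.fromisoformat(line.split(maxsplit=1)[0])
        except (ValueError, IndexError):
            continue  # skip malformed or empty lines
        if ts >= cutoff:
            count += 1
    return count
```

The server would apply this over every `*.running` file (e.g. via `glob.glob`) and sum the counts to get the fleet-wide total for the window. One caveat with this approach: scanning whole files every T minutes gets expensive as logs grow, so remembering a per-file offset between scans (or rotating the logs) is worth considering.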
Please let me know your suggestions.
Thanks