
I have an application (let's call it Master) which runs on Linux and starts several processes (let's call them Workers) using fork/exec. Each Worker therefore has its own PID and writes its own logs. When running directly on a host machine (without Docker), each process uses syslog for logging, and rsyslog puts the output from each Worker into a separate file, using a config like this:

$template workerfile,"/var/log/%programname%.log"
:programname, startswith, "worker" ?workerfile
:programname, isequal, "master" "/var/log/master"
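
For context, each process opens syslog under its own program name, roughly like this (a simplified sketch; the "master"/"worker1" identifiers and the worker path are illustrative, the real ones come from configuration):

#include <sys/types.h>
#include <syslog.h>
#include <unistd.h>

int main(void) {
    /* the openlog ident becomes the tag that rsyslog
       matches as %programname% */
    openlog("master", LOG_PID, LOG_USER);
    syslog(LOG_INFO, "master starting");

    pid_t pid = fork();
    if (pid == 0) {
        /* child: exec the worker binary, which calls
           openlog("worker1", ...) itself */
        execl("/usr/local/bin/worker", "worker1", (char *)NULL);
        _exit(127);
    }
    syslog(LOG_INFO, "forked worker, pid %d", (int)pid);
    return 0;
}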

Now, I want to run my application inside a Docker container. Docker starts the Master process as the main process (in the CMD section of the Dockerfile), and Master then forks the Workers at runtime (I'm not sure if this is a canonical way to use Docker, but that's what I have). Of course, Docker only captures the stdout of the Master process, and the Workers' logs get lost.
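
For reference, the container setup is essentially this (base image and paths are illustrative):

FROM debian:bullseye
COPY master /usr/local/bin/master
COPY worker /usr/local/bin/worker
CMD ["/usr/local/bin/master"]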

So my question is: is there any way I could get the logs from the forked processes? To be precise, I eventually want the logs from the different processes to appear in individual files on the host machine.

I tried running the rsyslog daemon inside the Docker container (just like I do when running without Docker), writing the logs to a mounted volume, but it doesn't seem to work. I guess it would require a workaround like supervisord to run the Master process and rsyslogd at the same time, which looks like overkill to me.
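
The kind of workaround I'd rather avoid would look roughly like this entrypoint script, used in place of the CMD above (rsyslogd daemonizes into the background by default; the master path is illustrative):

#!/bin/sh
# /var/log is expected to be bind-mounted from the host
# start rsyslogd; without -n it forks into the background
rsyslogd
# run Master in the foreground as the container's main process
exec /usr/local/bin/master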

I couldn't find any simple solution for this, even though the problem seems trivial.

Any help is appreciated, thanks!

Michael D.
