
I run a Docker container with supervisord like this:

Dockerfile

CMD ["/run.sh"]

run.sh

#!/usr/bin/env bash
exec supervisord -n

supervisor-serf.conf

[group:job]
programs=serf,producer

[program:serf]
command=/start-serf-agent.sh
numprocs=1
autostart=true
autorestart=unexpected
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

start-serf-agent.sh

#!/bin/bash
exec serf agent --join=serf:7946 -tag role=producer

supervisor-service.conf

[program:producer]
command=/start.sh
numprocs=1
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

start.sh

#!/bin/bash
exec /producer --project=${NAME}

After the producer stops, I get:

producer_1 |     2016/02/29 21:59:50 [INFO] serf: EventMemberLeave: 7c4fbc80af97 172.19.0.2
producer_1 | 2016/02/29 21:59:51 INF    1 stopping
producer_1 | 2016/02/29 21:59:51 INF    1 exiting router
producer_1 | 2016-02-29 21:59:51,281 INFO exited: producer (exit status 0; expected)
producer_1 |     2016/02/29 21:59:51 [INFO] agent: Received event: member-leave

but the serf agent keeps the container in a running state. I want to stop the Docker container when the producer finishes its work properly with status 0. I tried joining the processes into one group, but that doesn't seem to work. What did I miss? Please help!

Vitaly Velikodny
  • possible dupe of https://serverfault.com/questions/735328/shutdown-supervisord-on-subprocess-exit – ibotty Mar 08 '16 at 13:55

5 Answers


I resolved the issue with a supervisor eventlistener:

[program:worker]
command=/start.sh
priority=2
process_name=worker
numprocs=1
stopasgroup=true
killasgroup=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[eventlistener:worker_exit]
command=/kill.py
process_name=worker
events=PROCESS_STATE_EXITED

kill.py

#!/usr/bin/env python
import os
import signal
import sys

def write_stdout(s):
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    sys.stderr.write(s)
    sys.stderr.flush()

def main():
    while True:
        write_stdout('READY\n')
        line = sys.stdin.readline()
        write_stdout('This line kills supervisor: ' + line)
        try:
            with open('/var/run/supervisord.pid', 'r') as pidfile:
                pid = int(pidfile.readline())
            os.kill(pid, signal.SIGQUIT)
        except Exception as e:
            write_stdout('Could not kill supervisor: ' + str(e) + '\n')
        write_stdout('RESULT 2\nOK')

if __name__ == '__main__':
    main()
The main issue was that I forgot to set **process_name**.
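For the original goal (stop the container only when `producer` exits cleanly with status 0), the PROCESS_STATE_EXITED payload can be checked before sending SIGQUIT. A minimal stdlib-only sketch, assuming the payload field names documented for supervisor events; `producer` is the process name from the question:

```python
#!/usr/bin/env python
import os
import signal
import sys

def parse_fields(line):
    # supervisor headers/payloads are 'key1:val1 key2:val2 ...' on one line
    return dict(pair.split(':', 1) for pair in line.split())

def should_shutdown(payload, name='producer'):
    # PROCESS_STATE_EXITED payloads carry processname, groupname,
    # from_state, expected and pid; expected:1 means a clean, expected exit
    fields = parse_fields(payload)
    return fields.get('processname') == name and fields.get('expected') == '1'

def main():
    while True:
        sys.stdout.write('READY\n')   # tell supervisord we can take an event
        sys.stdout.flush()
        header = parse_fields(sys.stdin.readline())
        payload = sys.stdin.read(int(header['len']))  # payload is exactly 'len' bytes
        if should_shutdown(payload):
            with open('/var/run/supervisord.pid') as f:
                os.kill(int(f.read()), signal.SIGQUIT)
        sys.stdout.write('RESULT 2\nOK')  # acknowledge the event
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```

This way an unexpected crash of `producer` (nonzero status) would leave the container running, while a clean exit shuts it down.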

Also a good article on process management in Docker containers.

Vitaly Velikodny

Here's a slightly more streamlined version that uses a shell script instead of a Python script, and also covers multiple services, killing supervisord as a whole if either one fails.

supervisord.conf
$ cat /etc/supervisord.conf
[supervisord]
nodaemon=true
loglevel=debug
logfile=/var/log/supervisor/supervisord.log
pidfile=/var/run/supervisord.pid
childlogdir=/var/log/supervisor

[program:service1]
command=/usr/sbin/service1
user=someone
autostart=true
autorestart=true
startsecs=30
process_name=service1

[program:service2]
command=/usr/sbin/service2
user=root
autostart=true
autorestart=true
startsecs=30
process_name=service2

[eventlistener:processes]
command=stop-supervisor.sh
events=PROCESS_STATE_STOPPED,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL

stop-supervisor.sh
$ cat stop-supervisor.sh
#!/bin/bash

printf "READY\n";

while read line; do
  echo "Processing Event: $line" >&2;
  kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin
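The shell loop above treats each event as a single line, which works here because the header is newline-terminated and the script kills supervisord on the first event anyway. Strictly, supervisord then sends a payload of exactly `len` bytes with no trailing newline. A sketch of reading one complete event in Python (header token names as documented for supervisor's eventlistener protocol; the sample event below is fabricated for illustration):

```python
import io

def read_event(stream):
    # one 'key:value ...' header line, then exactly header['len'] bytes of payload
    header = dict(tok.split(':', 1) for tok in stream.readline().split())
    payload = stream.read(int(header['len']))
    return header, payload

# Simulated event, as supervisord would write it to the listener's stdin:
fake = io.StringIO(
    'ver:3.0 server:supervisor serial:21 pool:processes poolserial:10 '
    'eventname:PROCESS_STATE_EXITED len:76\n'
    'processname:service1 groupname:service1 from_state:RUNNING expected:0 pid:42'
)
header, payload = read_event(fake)
```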


slm

For those who don't want a separate file:

[supervisord]
loglevel=warn
nodaemon=true

[program:hi]
command=bash -c "echo waiting 5 seconds . . . && sleep 5"
autorestart=false
numprocs=1
startsecs=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[eventlistener:processes]
command=bash -c "printf 'READY\n' && while read line; do kill -SIGQUIT $PPID; done < /dev/stdin"
events=PROCESS_STATE_STOPPED,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL
Clay Risser
  • I confirm this works. Would you please explain how -- why read is appropriate? FWIW, I chose to add the config line `startretries=1` (default is 3) to the `[program:foo]` entries in my supervisord.conf file because retrying with a bad configuration like an invalid password rarely ends well :) – chrisinmtown May 05 '22 at 12:03

Here's a simple solution for Docker. In your supervisord.conf, replace this:

[program:something]
command = something

with this:

[program:something]
command = sh -c 'something && kill 1'
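One caveat worth noting (my addition, not part of the original answer): with `&&`, `kill 1` only fires when the command succeeds, so a crashed service leaves the container running. A sketch of a variant that stops the container on any exit while preserving the exit status:

```ini
[program:something]
; stop the container whether 'something' succeeds or fails,
; and propagate its exit status back to supervisord
command = sh -c 'something; status=$?; kill 1; exit $status'
```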

If we want to stop supervisor when specific services crash, but keep it running in other cases, we can use something like this:

[program:primary_required]
command=sh -c '/path/to/app start || supervisorctl shutdown'

[program:secondary_required_fire_and_forget]
command=sh -c '/path/to/app migrate || supervisorctl shutdown'
exitcodes=0

[program:non_required_service]
command=service non_required start

With this approach we can use supervisor as the Docker entrypoint and raise an error on `docker start` when something required fails. This is useful if we are using services like Elastic Beanstalk, Kubernetes, etc. that depend on the container status to know whether everything is OK.

This also ensures that all services already started will do a graceful shutdown instead of being killed, and without any orphaned processes.

A possibly unintentional behaviour of this approach is that when a service fails, it shuts down supervisor before allowing a retry, so retries do not work. In my case this is not a problem.