
[[ Problem ]]

I use the psutil library to set the CPU affinity of subprocesses, and I run the script with mpirun under a job scheduler.

After I delete the job from the job scheduler, I ssh into the node.

When I check with ps aux, the Python main process and the 11 Python subprocesses are still running and still updating the log.

Only the mpirun process gets killed.

The psutil library does not change the PID when it sets the CPU affinity of a Python subprocess.

Without mpirun, the job scheduler kills the Python subprocesses without problem.

[[ Question ]]

How can I make the processes actually get killed when the job scheduler deletes the job?

Thanks.
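One possible direction (a Linux-only sketch, not verified with mpirun or this scheduler): ask the kernel to signal each Python process when its parent dies, using prctl(PR_SET_PDEATHSIG). Called at the top of main() and of worker(), this should make the whole process tree die when mpirun is killed. The die_with_parent helper name is made up for illustration; the constant comes from <sys/prctl.h>.

```python
import ctypes
import signal

def die_with_parent(sig=signal.SIGKILL):
    """Ask Linux to deliver `sig` to this process when its parent exits.

    Uses prctl(PR_SET_PDEATHSIG, sig); PR_SET_PDEATHSIG = 1 in <sys/prctl.h>.
    Linux-only. Note the flag is per-process (not inherited across execve),
    and it does not fire if the parent already died before this call,
    so check os.getppid() right after setting it if that matters.
    """
    PR_SET_PDEATHSIG = 1
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.prctl(PR_SET_PDEATHSIG, int(sig), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")
```

Whether this helps depends on what exactly the scheduler kills: it works if the scheduler kills mpirun and mpirun is the direct parent of the Python main process on the node.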

[[ Test code ]]

import logging
import psutil
import time
import multiprocessing as mp

def main():
    logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)
    # set_process_affinity(0)
    num_cpu = mp.cpu_count()
    logging.info('call worker')

    # Starts num_cpu - 1 subprocesses
    # For a node with 12 cpu, this makes 11 processes.

    mp_pool = mp.Pool(processes=num_cpu - 1)
    result_list = [mp_pool.apply_async(worker, (i,)) for i in range(1, num_cpu)]
    mp_pool.close()
    mp_pool.join()

def set_process_affinity(cpu_id):
    psutil_proc = psutil.Process()
    logging.info("cpu_id #%d :: proc_info_before %s" % (cpu_id, psutil_proc))
    psutil_proc.cpu_affinity([cpu_id])
    psutil_proc = psutil.Process()
    logging.info("cpu_id #%d :: proc_info_after %s" % (cpu_id, psutil_proc))

def worker(worker_id):
    # Configure logging before the first logging call in this process.
    # (With fork-based multiprocessing the parent's handlers are inherited,
    # so on Linux this is effectively a no-op.)
    logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)
    cpu_id = worker_id
    set_process_affinity(cpu_id)
    for cycle in range(10000):
        logging.info("worker #%d :: cycle %d" % (worker_id, cycle))
        waste_time = []
        for i in xrange(1000000):
            waste_time += [i]
        #time.sleep(10)

if __name__ == '__main__':
    main()

[[ Command in job script ]]

with mpirun (openmpi-1.8.1):

mpirun -np 1 --map-by node python2.7 -u ps_test.py &> ps_test.log

without mpirun:

python2.7 -u ps_test.py &> ps_test.log

[[ Check with ssh into node, then run ps aux | grep "rxu" ]]

with mpirun:

= Before killing job with task scheduler (with mpirun) =

rxu       3686  0.3  0.0 105856  4396 ?        Sl   11:15   0:00 mpirun -np 1 --map-by node python2.7 -u ps_test.py
rxu       3688  0.6  0.1 172408 11660 ?        Sl   11:15   0:00 python2.7 -u ps_test.py
rxu       3689 96.0  0.4 206692 40656 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3690 96.0  0.4 206692 40660 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3691 96.0  0.4 206692 40676 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3692 96.0  0.4 206692 40680 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3693 96.0  0.4 206696 40688 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3694  102  0.4 206696 40684 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3695  102  0.4 203328 40684 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3696  102  0.4 203328 40668 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3697  102  0.4 203328 40668 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3698  101  0.4 203328 40668 ?        R    11:15   0:15 python2.7 -u ps_test.py
rxu       3699  102  0.4 203332 40672 ?        R    11:15   0:15 python2.7 -u ps_test.py
... some processes from pts/1 including ssh into the node

= After killing the job with the task scheduler (with mpirun) =

The mpirun process got killed.
The Python main process (the one with 0.0 %CPU) is still alive.
All 11 Python subprocesses are still alive (none got killed).
The machine has 12 CPUs.

rxu       3688  0.0  0.1 172408 11660 ?        Sl   11:15   0:00 python2.7 -u ps_test.py
rxu       3689 99.6  0.4 206692 40708 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3690 99.6  0.4 206692 40732 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3691 99.6  0.4 206692 40724 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3692 99.6  0.4 206692 40728 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3693 99.6  0.4 206696 40736 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3694  100  0.4 206696 40732 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3695  100  0.4 203328 40732 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3696 99.9  0.4 203328 40720 ?        R    11:15   3:33 python2.7 -u ps_test.py
rxu       3697  100  0.4 203328 40720 ?        R    11:15   3:34 python2.7 -u ps_test.py
rxu       3698 99.9  0.4 203328 40720 ?        R    11:15   3:33 python2.7 -u ps_test.py
rxu       3699  100  0.4 203332 40724 ?        R    11:15   3:34 python2.7 -u ps_test.py

= Log I get (with mpirun) =

(The PID did not change when the CPU affinity of the subprocess was set)

2016-06-09 11:15:45,280 call worker
2016-06-09 11:15:45,333 cpu_id #1 :: proc_info_before psutil.Process(pid=3689, name='python2.7')
2016-06-09 11:15:45,334 cpu_id #1 :: proc_info_after psutil.Process(pid=3689, name='python2.7')
2016-06-09 11:15:45,335 worker #1 :: cycle 0
2016-06-09 11:15:45,335 cpu_id #2 :: proc_info_before psutil.Process(pid=3690, name='python2.7')
2016-06-09 11:15:45,336 cpu_id #2 :: proc_info_after psutil.Process(pid=3690, name='python2.7')
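This is expected: CPU affinity is a per-process scheduler attribute, so setting it never forks or replaces the process. A minimal sketch showing the same thing with the standard library (os.sched_setaffinity, available in Python 3.3+ on Linux, is the stdlib analogue of psutil's cpu_affinity; the pin_to_cpu helper name is made up for illustration):

```python
import os

def pin_to_cpu(cpu_id):
    """Pin the calling process to a single CPU and show the PID is unchanged.

    os.sched_setaffinity (Python 3.3+, Linux) does what
    psutil.Process().cpu_affinity([cpu_id]) does: it changes a scheduler
    attribute of the existing process, so no new process is created.
    """
    pid_before = os.getpid()
    os.sched_setaffinity(0, {cpu_id})  # 0 means "the calling process"
    assert os.getpid() == pid_before   # same process, same PID
    return os.sched_getaffinity(0)
```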

without mpirun:

= Before killing job with task scheduler (without mpirun) =

The main process is the one using 0.3 %CPU.
The 11 workers are the ones using 101 %CPU.
The machine has 12 CPUs.

rxu       3399  0.3  0.1 237940 11660 ?        Sl   11:06   0:00 python2.7 -u ps_test.py
rxu       3400  101  0.4 206688 40648 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3401  101  0.4 206688 40652 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3402  101  0.4 206688 40672 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3403  101  0.4 206688 40676 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3404  101  0.4 206692 40684 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3405  101  0.4 206692 40680 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3406  101  0.4 203324 40680 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3407  101  0.4 203324 40664 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3408  101  0.4 203324 40664 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3409  101  0.4 203324 40664 ?        R    11:06   0:35 python2.7 -u ps_test.py
rxu       3410  101  0.4 203328 40668 ?        R    11:06   0:35 python2.7 -u ps_test.py

... some processes from pts/1 including ssh into the node

= After killing the job with the task scheduler (without mpirun) =

Nothing is left: no Python processes remain,
... except some processes from pts/1 including the ssh session into the node
  • One idea: In the worker, you can check if [the parent thread is alive.](http://stackoverflow.com/questions/23442651/check-if-the-main-thread-is-still-alive-from-another-thread) – Cyrbil Jun 09 '16 at 15:06
  • Thanks for the suggestion. I checked it the following way: from ps aux | grep "rxu", the main process with very low CPU usage lives on, so the parent process is still alive. – rxu Jun 09 '16 at 15:47
  • 1
    To see parent processes easily: `ps faux` – Cyrbil Jun 09 '16 at 15:53
  • running "killall -9 python2.7" on each of all the nodes after a test run with mpirun is just crazy. – rxu Jun 09 '16 at 16:36
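A sketch of the parent-liveness check suggested in the comments (the exit_if_orphaned helper name is made up for illustration): on Linux, when a process's parent dies the process is re-parented, so its ppid changes (typically to 1). Recording os.getppid() at worker start and checking it once per cycle inside the work loop would let each worker exit on its own:

```python
import os
import sys

def exit_if_orphaned(original_ppid):
    """Exit if this process has been re-parented, i.e. its original parent died.

    Intended usage in worker(): parent = os.getppid() once at startup,
    then exit_if_orphaned(parent) at the top of each work cycle.
    """
    if os.getppid() != original_ppid:
        sys.exit(1)
```

This only kills the workers when the main process dies; it does not, by itself, make the main process die when mpirun is killed, so it would be combined with a similar check (against mpirun's PID) in main().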

0 Answers