
I want to know if there's a way to run some code in the child process when the parent process tries to terminate it. Is there a way we could catch an exception there, maybe?

My code looks something like this:

main_process.py

from multiprocessing import Process
from time import sleep

def main():
    p1 = Process(target=child, args=(arg1,))
    p1.daemon = True  # must be set before start()
    p1.start()
    #blah blah blah code here
    sleep(5)
    p1.terminate()

def child(arg1):
    #blah blah blah
    itemToSend = {}
    #more blah blah
    snmpEngine.transportDispatcher.jobStarted(1) # this job would never finish
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    except:
        snmpEngine.transportDispatcher.closeDispatcher()
        raise

Since the job never finishes, the child process keeps running. I have to terminate it from the parent process because the child never exits on its own. However, I want to send itemToSend to the parent process before the child terminates. Can I return it to the parent somehow?

UPDATE: Let me explain how runDispatcher() of the pysnmp module works:

def runDispatcher():
    while jobsArePending():  # jobs are always pending because of jobStarted() function
        loop()

def jobStarted(jobId):
    if jobId in jobs:        # This way there's always 1 job remaining
        jobs[jobId] = jobs[jobId] + 1
    else:
        jobs[jobId] = 1

This is very frustrating. Instead of doing all this, is it possible to write an SNMP trap listener on our own? Can you point me to the right resources?
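For what it's worth, SNMP traps are plain UDP datagrams (conventionally sent to port 162, which requires root to bind), so the bare minimum of a hand-rolled listener is just a UDP socket; decoding the ASN.1/BER payload into varbinds is the part pysnmp normally does for you. A hedged sketch, where `listen_once` and port 10162 are my own arbitrary choices, not part of any library:

```python
import socket

def listen_once(port=10162, timeout=2.0):
    # Bind a UDP socket and wait for a single raw datagram.
    # A real trap listener would loop and BER-decode each payload.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(('127.0.0.1', port))
    try:
        data, addr = sock.recvfrom(4096)  # one raw trap datagram
        return data, addr
    finally:
        sock.close()
```

This only receives bytes; parsing them into SNMP PDUs is non-trivial, which is why pysnmp exists.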

user2511458
  • @Alp I'm afraid the answer to both is "yes". The `snmpEngine.transportDispatcher.runDispatcher()` function runs indefinitely. It never stops. The only way to stop it is to terminate the entire process. But I need the child process to send `itemToSend`, which is calculated while `runDispatcher()` is running, back to the parent process. – user2511458 Mar 13 '14 at 06:38
  • @user2511458: run `runDispatcher` in a background daemon thread in the child process, [use the main thread to run `child()` function as in my answer](http://stackoverflow.com/a/22365109/4279) to wait for `stopped_event` and to send `itemToSend` at the end. – jfs Mar 13 '14 at 06:53
  • @J.F. Sebastian Sorry, I forgot to mention that my child process was a daemon. And daemonic processes are not allowed to have children – user2511458 Mar 13 '14 at 10:36
  • @user2511458: [the child process may have multiple *threads*](https://gist.github.com/zed/9526217). – jfs Mar 13 '14 at 10:54
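The comment above suggests running the blocking dispatcher in a background daemon thread inside the child process, leaving the child's main thread free to wait for a stop event and send `itemToSend` back. A minimal sketch of that idea, where `fake_run_dispatcher` is a stand-in for the never-returning `snmpEngine.transportDispatcher.runDispatcher()`:

```python
import threading
import time

item_to_send = {}

def fake_run_dispatcher(stop):
    # Stand-in for pysnmp's mainloop: it keeps working until told to stop.
    while True:
        item_to_send['sysUpTime'] = time.time()  # work done while "dispatching"
        if stop.wait(0.01):
            return

def child_main():
    stop = threading.Event()
    # Daemon thread: unlike a daemonic *process*, a process may always own threads.
    t = threading.Thread(target=fake_run_dispatcher, args=(stop,), daemon=True)
    t.start()                 # dispatcher runs in the background
    stop.set()                # in the real child, set when the parent asks us to exit
    t.join(1)
    return item_to_send       # in the real child, send this through a Pipe

result = child_main()
```

The main thread never blocks inside the dispatcher, so it can react to the parent and hand over the result before exiting.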

2 Answers


The .runDispatcher() method actually invokes the mainloop of an asynchronous I/O engine (asyncore/twisted), which terminates as soon as no active pysnmp 'jobs' are pending.

You can make the pysnmp dispatcher cooperate with the rest of your app by registering your own timer callback function, which will be invoked periodically from the mainloop. In your callback you could check whether a termination event has arrived and release the pysnmp 'job', which would make the pysnmp mainloop complete.

def timerCb(timeNow):
    if terminationRequestedFlag:  # this flag is raised by an event from parent process
        # use the same jobId as in jobStarted()
        snmpEngine.transportDispatcher.jobFinished(1)  

snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)

Those pysnmp jobs are just counters (like the '1' in your code) that tell the I/O core that asynchronous applications still need it to run and serve them. Once the last of potentially many apps is no longer interested in the I/O core's operation, the mainloop terminates.
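The job-counter mechanism can be modelled without pysnmp at all. The sketch below is a simplified stand-in (the `MiniDispatcher` class and its internals are mine, not pysnmp's) showing how a timer callback that calls `jobFinished()` lets the mainloop exit:

```python
class MiniDispatcher:
    # Toy model of pysnmp's transport dispatcher job bookkeeping.
    def __init__(self):
        self.jobs = {}
        self.timer_cb = None

    def jobStarted(self, job_id):
        self.jobs[job_id] = self.jobs.get(job_id, 0) + 1

    def jobFinished(self, job_id):
        self.jobs[job_id] -= 1
        if not self.jobs[job_id]:
            del self.jobs[job_id]

    def registerTimerCbFun(self, cb):
        self.timer_cb = cb

    def runDispatcher(self):
        # The mainloop keeps running while any job is pending,
        # invoking the timer callback on each pass.
        now = 0
        while self.jobs:
            if self.timer_cb:
                self.timer_cb(now)
            now += 1

termination_requested = False

def timerCb(time_now):
    if termination_requested:
        dispatcher.jobFinished(1)  # same job id as jobStarted(1)

dispatcher = MiniDispatcher()
dispatcher.registerTimerCbFun(timerCb)
dispatcher.jobStarted(1)
termination_requested = True       # e.g. set via a multiprocessing.Event
dispatcher.runDispatcher()         # returns once the last job is released
```

Once `jobFinished(1)` drops the counter to zero, the `while self.jobs` loop condition fails and control returns to the caller, which is exactly the exit path the answer describes.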

Ilya Etingof

If the child process can cooperate, then you could use a multiprocessing.Event to inform the child that it should exit, and a multiprocessing.Pipe to send itemToSend to the parent:

#!/usr/bin/env python
import logging
import multiprocessing as mp
from threading import Timer

def child(stopped_event, conn):
    while not stopped_event.wait(1):
        pass
    mp.get_logger().info("sending")
    conn.send({'tosend': 'from child'})
    conn.close()

def terminate(process, stopped_event, conn):
    stopped_event.set() # nudge child process
    Timer(5, do_terminate, [process]).start()
    try:
        print(conn.recv())  # get value from the child
        mp.get_logger().info("received")
    except EOFError:
        mp.get_logger().info("eof")

def do_terminate(process):
    if process.is_alive():
        mp.get_logger().info("terminating")
        process.terminate()

if __name__ == "__main__":
    mp.log_to_stderr().setLevel(logging.DEBUG)
    parent_conn, child_conn = mp.Pipe(duplex=False)
    event = mp.Event()
    p = mp.Process(target=child, args=[event, child_conn])
    p.start()
    child_conn.close() # child must be the only one with it opened
    Timer(3, terminate, [p, event, parent_conn]).start()

Output

[DEBUG/MainProcess] created semlock with handle 139845842845696
[DEBUG/MainProcess] created semlock with handle 139845842841600
[DEBUG/MainProcess] created semlock with handle 139845842837504
[DEBUG/MainProcess] created semlock with handle 139845842833408
[DEBUG/MainProcess] created semlock with handle 139845842829312
[INFO/Process-1] child process calling self.run()
[INFO/Process-1] sending
{'tosend': 'from child'}
[INFO/Process-1] process shutting down
[DEBUG/Process-1] running all "atexit" finalizers with priority >= 0
[DEBUG/Process-1] running the remaining "atexit" finalizers
[INFO/MainProcess] received
[INFO/Process-1] process exiting with exitcode 0
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers
jfs
  • Thank you for your reply. This solution would work beautifully if it weren't for the `runDispatcher()` function. The `while` loop in child process never gets executed because `runDispatcher()` doesn't exit without terminating the process. – user2511458 Mar 13 '14 at 06:40