
Is there a way to raise an exception in the child process when the main process gets a KeyboardInterrupt exception (instead of a loop polling an event or queue value)?

For now I am using a Queue to communicate the KeyboardInterrupt triggered in the main process to the child processes. For the while-loop part this works: the sentinel gets noticed in the child process loop, and so far I can do a proper clean-up for the child process.

However, when KeyboardInterrupt gets triggered during the child's initialization, I have to check after every statement whether the user has aborted the main process. Another option would be to trigger an exception by freeing the connection resource (which will be used later), so that a (general or connection-related) exception gets raised.

Are there better ways to get a proper clean-up? (Daemon processes will not allow a proper clean-up, I think.)

def connect(self):
  self.conn = mysql.connector.connect(
    host="192.168.10.10",
    user="homestead",
    password="xxxx",
    database="xxxx"
  )
  self.cursor = self.conn.cursor() 

def dispose(self):
  self.cursor.close()
  self.conn.close()
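The connect()/dispose() pair above is the shape a context manager handles well: with a try/finally, dispose() runs no matter where an interrupt lands after the connection is open. A minimal sketch (the managed_connection name and the callable parameters are hypothetical, standing in for the methods above):

```python
from contextlib import contextmanager

@contextmanager
def managed_connection(connect, dispose):
    # connect/dispose are callables standing in for the question's
    # connect()/dispose() methods (hypothetical parameterization)
    conn = connect()
    try:
        yield conn
    finally:
        # runs even if KeyboardInterrupt interrupts the body
        dispose(conn)
```

Wrapping the worker body in `with managed_connection(...)` guarantees the clean-up runs once connect() has returned, even if the interrupt arrives during a later initialization step.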


def init(self):
  # set up root logger
  # ...
  root_logger = logging.getLogger()
  root_logger.addHandler(fh)
  # ...
  try:
    # init check 1 for KeyboardInterrupt exception in main process (*1)
    # (something like an Event.is_set() that is set whenever
    # KeyboardInterrupt is raised in the main process could work too)
    row = self.task_queue.get(timeout=5)
    if row is None:  # None is sent on KeyboardInterrupt in main process
      self.task_queue.task_done()
      return False
      # for example, calling self.dispose() here generates an exception at
      # self.connect() because the connection gets closed / freed (*2)
      # or raise CustomException (*2b)?
  except queue.Empty:
    pass

  # ...
  self.connect()
  # ...
  try:
    # init check 2 for KeyboardInterrupt exception in main process (*3)
    row = self.task_queue.get(timeout=5)
    if row is None:
      self.task_queue.task_done()
      self.dispose()
      return False
      # raise CustomException?
  except queue.Empty:
    pass

  return True

def run(self):

  try:
    self.init()
  except KeyboardInterrupt:
    # would something like this be possible (or disrupt the code flow
    # to elicit another exception like in *2, or raise a CustomException
    # as in *2b, both of which get caught here, as an alternative)?
    ...
    # this would be handy instead of checking after each statement
    # in the init parts (*1, *3)
  except Exception:
    logging.error("Something went wrong during initialization")
    self.task_queue.task_done()
    self.dispose()
    return

  while True:

    if not self.conn.is_connected():
      # ....
      pass

    row = None
    empty = False
    leave = False

    try:
      row = self.task_queue.get(timeout=5)
      if row is None:
        self.task_queue.task_done()
        leave = True
      else:
        # save item
        pass
    except queue.Empty:
      empty = True

    if leave:
      break
  self.dispose()
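The CustomException idea from *2b can collapse the repeated init checks into one helper that raises instead of returning False, so run() handles the abort in a single except clause. A sketch assuming the None-sentinel protocol above (check_abort and AbortRequested are made-up names):

```python
import queue

class AbortRequested(Exception):
    """Signals that the main process asked the child to shut down."""

def check_abort(task_queue, timeout=5):
    # Poll once for the None sentinel; raise instead of returning False,
    # so run() can catch AbortRequested in a single place.
    try:
        row = task_queue.get(timeout=timeout)
    except queue.Empty:
        return  # nothing pending, carry on initializing
    if row is None:
        task_queue.task_done()
        raise AbortRequested
    # a real work item arrived early: balance the get, then re-queue it
    task_queue.task_done()
    task_queue.put(row)
```

init() would then just call check_abort(self.task_queue) between steps (*1, *3), and run() catches AbortRequested once and calls dispose() there. Note the re-queue can reorder items if several are pending; that is acceptable here only because init normally runs before work is enqueued.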

BTW: I have read some other topics like Python: while KeyboardInterrupt is forwarded to multiprocessing child process? and How to use KeyboardInterrupt from the main process to stop child processes?

Edit (added from main()):

def manage_ctrlC(*args):
    sqlDataSaver.exit.set()

def main():

  global tasks, sqlDataSaver

  # Manage Ctrl_C keyboard event 
  signal.signal(signal.SIGINT, manage_ctrlC)  # dummy, not used yet

  # ...

  tasks = multiprocessing.JoinableQueue() 
  sqlDataSaver = sqlExecutor(tasks) # inherits from multiprocessing.Process
  sqlDataSaver.start() 

@Tim Roberts:

You mean something like this? So each process has its own sigint handler and a separate cleanup process that is triggered by the exception that is raised in each handler?

from multiprocessing import Process, current_process
import signal
import time
import sys

class SigInt(Exception):
    """SIG INT"""
    pass

class MyProcess(Process):
    def __init__(self, toExecute, sighandler):

        Process.__init__(self)

        self.toExecute = toExecute
        self.sighandler = sighandler

    def run(self):
        # set up custom handler
        signal.signal(signal.SIGINT,  self.sighandler)
        while True:
            try:
                self.toExecute()
            except SigInt:
                # clean up
                break
        print(current_process().name," process exited")

def manage_ctrlC_main(*args):
    print('main crtl-c')
    sys.exit()

def toExecute1():
    time.sleep(1)
    print("exec1")

def toExecute2():
    time.sleep(1)
    print("exec2")

def sigh1(signal, frame):
    print('sig 1 handler')
    raise SigInt

def sigh2(signal, frame):
    print('sig 2 handler')
    raise SigInt

def main():
    global myProcess1, myProcess2

    signal.signal(signal.SIGINT, manage_ctrlC_main)

    myProcess1 = MyProcess(toExecute1,sigh1)
    myProcess1.start()

    time.sleep(4)
    
    myProcess2 = MyProcess(toExecute2,sigh2)
    myProcess2.start()

    myProcess1.join()
    myProcess2.join()
 
if __name__ == '__main__':
    main()
Mat90
  • Where do you start the multiprocessing? If you have the process ID, then you can use the `signal` module to send the process a `signal.SIGINT`, which is exactly what Ctrl-C does. – Tim Roberts Oct 20 '21 at 21:11
  • @TimRoberts, I added the code that is called in main(). I am wondering how to implement it, as you suggest. Can I transfer the SIGINT (duplicating it) to the child process so that both processes will enter their cleaning up phase? – Mat90 Oct 21 '21 at 17:28
  • If you have a `multiprocessing.Process` object, you should be able to do `os.kill( process.pid, signal.SIGINT)` – Tim Roberts Oct 21 '21 at 19:54
  • Ah, thank you. So I assume I can get the child's process ID by reading the multiprocessing.current_process().pid property, @TimRoberts? Also, I came up with a possible alternative solution, see the initial post, it seems to be working. – Mat90 Oct 21 '21 at 20:04
  • I think `multiprocessing.current_process()` gets you YOUR process id. You don't want to kill that! – Tim Roberts Oct 21 '21 at 20:21
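Tim Roberts' os.kill suggestion, put together: the parent reads the child's pid from the Process object it holds (not from current_process(), which would be the parent itself), and the forwarded SIGINT surfaces in the child as an ordinary KeyboardInterrupt. A minimal runnable sketch:

```python
import multiprocessing
import os
import signal
import time

def worker():
    try:
        while True:
            time.sleep(0.2)  # stand-in for real work
    except KeyboardInterrupt:
        # the forwarded SIGINT surfaces here as a normal exception
        pass  # clean-up (dispose(), task_done(), ...) would go here

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(0.5)                # let the child get going
    os.kill(p.pid, signal.SIGINT)  # p.pid: the child's id, held by the parent
    p.join()
```

This relies on the child keeping the default SIGINT handler; if the child installs its own handler (as in the SigInt example above), that handler fires instead of KeyboardInterrupt.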
