
(I'm using the pyprocessing module in this example, but replacing processing with multiprocessing should probably work if you run Python 2.6 or use the multiprocessing backport)

I currently have a program that listens to a Unix socket (using a processing.connection.Listener), accepts connections and spawns a thread to handle each request. At a certain point I want to quit the process gracefully, but the accept() call blocks and I see no way of cancelling it in a nice way. I have one way that works, here (OS X) at least: setting a signal handler and signalling the process from another thread, like so:

import processing
from processing.connection import Listener
import threading
import time
import os
import signal
import socket
import errno

# This is actually called by the connection handler.
def closeme():
    time.sleep(1)
    print 'Closing socket...'
    listener.close()
    os.kill(processing.currentProcess().getPid(), signal.SIGPIPE)

oldsig = signal.signal(signal.SIGPIPE, lambda s, f: None)

listener = Listener('/tmp/asdf', 'AF_UNIX')
# This is a thread that handles one already accepted connection, left out for brevity
threading.Thread(target=closeme).start()
print 'Accepting...'
try:
    listener.accept()
except socket.error, e:
    if e.args[0] != errno.EINTR:
        raise
# Cleanup here...
print 'Done...'

The only other way I've thought of is reaching deep into the connection (listener._listener._socket) and setting the non-blocking option, but that probably has some side effects and is generally really scary.
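For concreteness, that approach would look something like the sketch below (written against multiprocessing's naming; _listener._socket is a private attribute, so this could break between versions):

```python
import os
import socket
import tempfile
from multiprocessing.connection import Listener

# Bind to a throwaway unix socket path for the example.
path = os.path.join(tempfile.mkdtemp(), 'listener.sock')
listener = Listener(path, 'AF_UNIX')

# Reach into the private socket and give accept() a timeout, so it
# raises socket.timeout instead of blocking forever.
listener._listener._socket.settimeout(1.0)

timed_out = False
try:
    conn = listener.accept()
except socket.timeout:
    # No client arrived in time; a real server would check a stop flag
    # here and either loop around or fall through to cleanup.
    timed_out = True

listener.close()
```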

Does anyone have a more elegant (and perhaps even correct!) way of accomplishing this? It needs to be portable to OS X, Linux and BSD, but Windows portability etc is not necessary.

Clarification: Thanks all! As usual, ambiguities in my original question are revealed :)

  • I need to perform cleanup after I have cancelled the listening, and I don't always want to actually exit that process.
  • I need to be able to access this process from other processes not spawned from the same parent, which makes Queues unwieldy.
  • The reasons for threads are that:
    • They access shared state, actually more or less a common in-memory database, so I suppose it could be done differently.
    • I must be able to have several connections accepted at the same time, but the actual threads are blocking for something most of the time. Each accepted connection spawns a new thread, so as not to block all clients on I/O operations.

Regarding threads vs. processes, I use threads for making my blocking ops non-blocking and processes to enable multiprocessing.

Petros Koutsolampros
Henrik Gustafsson

5 Answers


Isn't that what select is for?

Only call accept on the socket if select indicates it will not block...

select takes a timeout, so you can break out occasionally to check whether it's time to shut down...
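As a sketch (assuming you can get at the raw listening socket; serve and should_stop are illustrative names, not part of the Listener API):

```python
import select
import socket

def serve(listener_sock, should_stop):
    """Accept connections, but wake up every half second to poll a stop flag."""
    while not should_stop():
        # select returns as soon as the socket is readable (a connection is
        # waiting), or after the 0.5 s timeout otherwise.
        readable, _, _ = select.select([listener_sock], [], [], 0.5)
        if readable:
            conn, _ = listener_sock.accept()
            # hand conn off to a worker thread here...
            conn.close()
```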

  • Not a bad idea per se, but the Listener object does not expose the underlying socket, and I would rather not violate the Law of Demeter in such a big way. As it turns out, however, that is exactly what I have to do :) – Henrik Gustafsson Jan 21 '09 at 13:44

I thought I could avoid it, but it seems I have to do something like this:

from processing import connection
connection.Listener.fileno = lambda self: self._listener._socket.fileno()

import select

l = connection.Listener('/tmp/x', 'AF_UNIX')
r, w, e = select.select((l, ), (), ())
if l in r:
  print "Accepting..."
  c = l.accept()
  # ...

I am aware that this breaks the Law of Demeter and introduces some evil monkey-patching, but it seems to be the most portable way of accomplishing this. If anyone has a more elegant solution I would be happy to hear it :)

Henrik Gustafsson

I'm new to the multiprocessing module, but it seems to me that mixing the processing module and the threading module is counter-intuitive; aren't they targeted at solving the same problem?

Anyway, how about wrapping your listen functions into a process itself? I'm not clear how this affects the rest of your code, but this may be a cleaner alternative.

from multiprocessing import Process
from multiprocessing.connection import Listener


class ListenForConn(Process):

    def run(self):
        listener = Listener('/tmp/asdf', 'AF_UNIX')
        listener.accept()

        # do your other handling here


listen_process = ListenForConn()
listen_process.start()

print listen_process.is_alive()

listen_process.terminate()
listen_process.join()

print listen_process.is_alive()
print 'No more listen process.'
monkut
  • That is more or less what I do, except you send SIGTERM instead of SIGPIPE, and you can't do the cleanup in the context of run(), but rather in a signal handler. – Henrik Gustafsson Dec 11 '08 at 06:20
  • Thanks for the clarification, nothing comes to mind at the moment, if I think of something I'll update my answer. ;) – monkut Dec 11 '08 at 14:18
  • @HenrikGustafsson If you saved all the things you need to cleanup into ivars, couldn't you do cleanup in `__del__`? – MikeyE Jun 02 '18 at 08:27

Probably not ideal, but you can release the block by sending the socket some data from the signal handler or from the thread that is terminating the process.

EDIT: Another way to implement this might be to use Connection Queues, since they seem to support timeouts (apologies, I misread your code on my first read).
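A sketch of the first suggestion, using multiprocessing names (serve, shutdown and the stop flag are illustrative, not from the question): the terminating thread makes a throwaway connection to release accept(), and a flag tells the loop it was a wake-up call rather than a real client:

```python
import threading
from multiprocessing.connection import Client, Listener

stop = threading.Event()

def serve(listener):
    while True:
        conn = listener.accept()   # blocks until someone connects
        if stop.is_set():          # it was the wake-up connection
            conn.close()
            break
        # hand conn off to a worker thread here...
        conn.close()

def shutdown(address):
    stop.set()
    Client(address).close()        # unblock the accept() above
```

Note the race discussed in the comments: a real client that connects between stop.set() and the wake-up connection would be dropped.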

codelogic
  • As it is not a receive operation I'm trying to cancel, I can't really just send data to the connection (there is none), and there is also a race condition there. As for the second suggestion: that would work if I were actually polling for data, but I'm not; I'm waiting for a new connection. Or did I misunderstand you? – Henrik Gustafsson Dec 10 '08 at 22:24
  • Since the Listener is accepting connections on the specified socket, shouldn't connecting to it from another thread release accept() (using multiprocessing.connection.Client)? Apologies for the ambiguity in the 2nd part of my response, I will correct it. – codelogic Dec 10 '08 at 22:59
  • Queues are not discoverable from external tools etc. Connecting to the socket will release the accept(), but would create a race, I think. – Henrik Gustafsson Dec 12 '08 at 23:38
  • @HenrikGustafsson Does this really cause a race condition? If it did, it would be similarly dangerous for multiple processes to connect to a listener at the same time, which I think is a common use case, and I don't see anything in the documentation against it. Otherwise, this solution is the most portable/high-level one (an implementation can be found in [another answer](https://stackoverflow.com/a/50655251/5267751)). – user202729 Aug 13 '21 at 13:02

I ran into the same issue. I solved it by sending a "stop" command to the listener. In the listener's main thread (the one that processes the incoming messages), every time a new message is received, I just check to see if it's a "stop" command and exit out of the main thread.

Here's the code I'm using:

def start(self):
    """
    Start listening
    """
    # set the command being executed
    self.command = self.COMMAND_RUN

    # startup the 'listener_main' method as a daemon thread
    self.listener = Listener(address=self.address, authkey=self.authkey)
    self._thread = threading.Thread(target=self.listener_main, daemon=True)
    self._thread.start()

def listener_main(self):
    """
    The main application loop
    """

    while self.command == self.COMMAND_RUN:
        # block until a client connection is received
        with self.listener.accept() as conn:

            # receive the subscription request from the client
            message = conn.recv()

            # if it's a shut down command, return to stop this thread
            if isinstance(message, str) and message == self.COMMAND_STOP:
                return

            # process the message

def stop(self):
    """
    Stops the listening thread
    """
    self.command = self.COMMAND_STOP
    client = Client(self.address, authkey=self.authkey)
    client.send(self.COMMAND_STOP)
    client.close()

    self._thread.join()

I'm using an authentication key to prevent would-be hackers from shutting down my service by sending a stop command from an arbitrary client.

Mine isn't a perfect solution. A better one might be to revise the code of multiprocessing.connection.Listener and add a stop() method. But that would require submitting the change for approval by the Python team.
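For what it's worth, such a stop() can also be sketched as a subclass, without touching the standard library (StoppableListener is a hypothetical name; it wakes the blocked accept() with a throwaway connection, much like the stop() above, and returns None to signal shutdown):

```python
from multiprocessing.connection import Client, Listener

class StoppableListener(Listener):
    """Hypothetical Listener subclass with a stop() method."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._stopped = False

    def stop(self):
        self._stopped = True
        # Wake the blocked accept() with a throwaway connection.
        Client(self.address).close()

    def accept(self):
        conn = super().accept()
        if self._stopped:
            conn.close()
            return None   # tell the caller we are shutting down
        return conn
```

A serving loop would then simply run until accept() returns None.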

MikeyE