53

I wrote a small Python application that runs as a daemon. It utilizes threading and queues.

I'm looking for general approaches to altering this application so that I can communicate with it while it's running. Mostly I'd like to be able to monitor its health.

In a nutshell, I'd like to be able to do something like this:

python application.py start  # launches the daemon

Later, I'd like to be able to come along and do something like:

python application.py check_queue_size  # return info from the daemonized process

To be clear, I don't have any problem implementing the Django-inspired syntax. What I don't have any idea how to do is to send signals to the daemonized process (start), or how to write the daemon to handle and respond to such signals.

Like I said above, I'm looking for general approaches. The only one I can see right now is telling the daemon to constantly log everything that might be needed to a file, but I hope there's a less messy way to go about it.

UPDATE: Wow, a lot of great answers. Thanks so much. I think I'll look at both Pyro and the web.py/Werkzeug approaches, since Twisted is a little more than I want to bite off at this point. The next conceptual challenge, I suppose, is how to go about talking to my worker threads without hanging them up.

Thanks again.

hanksims

8 Answers

36

Yet another approach: use Pyro (Python remoting objects).

Pyro basically allows you to publish Python object instances as services that can be called remotely. I have used Pyro for the exact purpose you describe, and I found it to work very well.

By default, a Pyro server daemon accepts connections from everywhere. To limit this, either use a connection validator (see documentation), or supply host='127.0.0.1' to the Daemon constructor to only listen for local connections.

Example code taken from the Pyro documentation:

Server

import Pyro.core

class JokeGen(Pyro.core.ObjBase):
    def __init__(self):
        Pyro.core.ObjBase.__init__(self)

    def joke(self, name):
        return "Sorry " + name + ", I don't know any jokes."

Pyro.core.initServer()
daemon = Pyro.core.Daemon()
uri = daemon.connect(JokeGen(), "jokegen")

print "The daemon runs on port:", daemon.port
print "The object's uri is:", uri

daemon.requestLoop()

Client

import Pyro.core

# you have to change the URI below to match your own host/port.
jokes = Pyro.core.getProxyForURI("PYROLOC://localhost:7766/jokegen")

print jokes.joke("Irmen")

Another similar project is RPyC. I have not tried RPyC.

Mark Mikofski
codeape
  • I think Pyro is total overengineering for this. It gives a lot of power and freedom, yes, but introduces a lot of new possible errors in the software. I'd only use Pyro if communication between different servers takes place, never locally. You always have better choices like Unix signals, which are much more robust in a local environment. Depending on how complicated your application logic is, they may be insufficient; if you need a sort of man-in-the-middle (which is what a Pyro proxy is under everything), I'd recommend an HTTP server to receive/send requests. That's a personal choice, though. – DGoiko Jan 08 '19 at 16:13
  • Anyway, good old TCP listening sockets are just enough for this; as always, though, there are security concerns. I'm making one complex daemon now, and I'm tempted to use Pyro (the project uses Pyro to create a multi-server remote worker pool, so most things are written in Pyro style and the serializers are already written; the main class itself inherits from Thread and works the way daemons work, and it is already being called with Pyro and registered in the name server), and still, with ALL that done, I'm reluctant to use it as my local daemon entry point. – DGoiko Jan 08 '19 at 16:20
  • Is `7766` default port number? – alper Aug 09 '21 at 09:53
18

What about having it run an http server?

It seems crazy, but running a simple web server for administering your server requires just a few lines using web.py.

You can also consider creating a unix pipe.

Ali Afshar
fulmicoton
  • Also +1 for HTTP interface. A python script can parse the command line options and send XMLRPC commands to an internal HTTP Server. – Van Gale Mar 18 '09 at 05:36
  • 1
    +1: HTTP. Embed a little WSGI app in the daemon to respond to requests. – S.Lott Mar 18 '09 at 10:59
  • 3
    (and @VanGale and @S.Lott) could someone please provide a reference/example for running an http server for the purpose of receiving commands like the OP described? I need to do this, but would like a little more detail. – synaptik Mar 05 '17 at 21:13
  • Wouldn't it be difficult to get the error trace log using an http server? – alper Nov 27 '21 at 14:10
16

Use werkzeug and make your daemon include an HTTP-based WSGI server.

Your daemon has a collection of small WSGI apps to respond with status information.

Your client simply uses urllib2 to make POST or GET requests to localhost:somePort. Your client and server must agree on the port number (and the URLs).

This is very simple to implement and very scalable. Adding new commands is a trivial exercise.

Note that your daemon does not have to respond in HTML (though that's often simple). Our daemons respond to WSGI requests with JSON-encoded status objects.
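A rough sketch of such an app, using only the standard library rather than werkzeug, with a made-up `get_status()` standing in for real daemon state:

```python
import json

# Hypothetical status source; a real daemon would read its queue here.
def get_status():
    return {"queue_size": 3, "uptime_seconds": 120}

def status_app(environ, start_response):
    """A minimal WSGI app: any request to /status returns JSON."""
    if environ.get("PATH_INFO") == "/status":
        body = json.dumps(get_status()).encode()
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(body)))])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Because it's plain WSGI, you can serve it with `wsgiref.simple_server.make_server("127.0.0.1", port, status_app)` in a background thread, or hand the same callable to werkzeug's `run_simple`.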

bstpierre
S.Lott
9

I would use Twisted with a named pipe, or just open up a socket. Take a look at the echo server and client examples. You would need to modify the echo server to check for some string passed by the client and then respond with whatever info was requested.

Because of Python's threading issues you are going to have trouble responding to information requests while simultaneously continuing to do whatever the daemon is meant to do anyway. Asynchronous techniques or forking another process are your only real options.

MrEvil
  • 1
    +1 for Twisted, see also twisted.manhole that provides a telnet interface directly into the running interpreter: http://twistedmatrix.com/projects/core/documentation/howto/telnet.html – Van Gale Mar 18 '09 at 05:34
  • "[...] you are going to have trouble responding to information requests while simultaneously continuing to do whatever the daemon is meant to do anyways" I find that claim unsupported. If you mean the GIL, it doesn't prevent this kind of concurency at all. – Rafał Dowgird Mar 18 '09 at 09:28
  • If the interpreter has acquired the GIL and is performing some long running operation then of course it's going to prevent the other thread from being serviced. The point is that a layman can't easily predict when the GIL will come into play and cause threading issues. – MrEvil Mar 18 '09 at 17:47
  • To my knowledge, the only possibility to grab the GIL for a "long running operation" is a bug in a C module. In normal circumstances, the GIL is never held for more than 1 Python instruction in a row nor during calls to a C procedure that might be blocking or long running. – Rafał Dowgird Mar 19 '09 at 13:27
  • I've seen this exact issue when using popen calls to the PGP command line. So your comment regarding the GIL only being locked for one instruction is balderdash. Also, as the documentation clearly points out, this behavior is non-deterministic. See the Python/C API reference for substantiation. – MrEvil Mar 19 '09 at 21:04
  • My mistake - 100 bytecode instructions, not 1 python instruction. This doesn't change the fact that it cannot be held for a long time (save for buggy C code). As to popen, it is known to cause lockups not because of GIL, but because of sequential (instead of parallel) read/write to pipes by parent. – Rafał Dowgird Mar 20 '09 at 08:55
  • So which of the following doesn't count as "threading issues", is it the GIL, buggy C code, or is it popen causing lockups? All of those issues result in threads unpredictably failing to work, thus necessitating the programmer to either fork a process or use Twisted. – MrEvil Mar 20 '09 at 21:30
  • 1
    Popen causes lockups only if you make the basic mistake of sequential read/write to pipes in parent process. This is true for every language, not only Python. Ditto for not releasing locks before blocking operations. So neither of the above counts as a *python* threading issue. – Rafał Dowgird Mar 21 '09 at 20:31
7
# your server

from twisted.web import xmlrpc, server
from twisted.internet import reactor

class MyServer(xmlrpc.XMLRPC):

    def xmlrpc_monitor(self, params):
        return server_related_info  # whatever status you want to expose

if __name__ == '__main__':
    r = MyServer()
    reactor.listenTCP(8080, server.Site(r))
    reactor.run()

A client can be written using xmlrpclib (xmlrpc.client in Python 3); check the example code in its documentation.
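For a self-contained illustration of the same XML-RPC round trip using only the standard library (`SimpleXMLRPCServer` standing in for the Twisted server, and `xmlrpc.client` as the client; the `monitor` method and its return values are made up):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Stdlib stand-in for the Twisted server above: expose a 'monitor'
# method returning hypothetical daemon stats as an XML-RPC struct.
def monitor():
    return {"queue_size": 5, "threads": 3}

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False,
                            allow_none=True)
server.register_function(monitor)
port = server.server_address[1]  # port 0 above picked a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# This client half is what 'python application.py check_queue_size'
# would run:
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}/")
stats = proxy.monitor()
```

Against the Twisted server in the answer, the client side is the same two lines, pointed at port 8080 and calling `proxy.monitor(params)`.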

Badri
5

Assuming you're under *nix, you can send signals to a running program with kill from a shell (and analogs in many other environments). To handle them from within Python, check out the signal module.
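A minimal sketch of this on *nix (the choice of SIGUSR1, and the `status` dict standing in for real daemon state, are arbitrary illustrations):

```python
import os
import signal

# Hypothetical daemon state we want to report on demand.
status = {"queue_size": 9}
reports = []

def dump_status(signum, frame):
    """SIGUSR1 handler: record a health snapshot; a real daemon
    might log it or write it to a status file instead."""
    reports.append(status["queue_size"])

# Install the handler; from a shell you would then trigger it with:
#   kill -USR1 <daemon pid>
signal.signal(signal.SIGUSR1, dump_status)

# Deliver the signal to ourselves just to demonstrate the round trip:
os.kill(os.getpid(), signal.SIGUSR1)
```

Note that signals can only trigger actions inside the daemon; they can't carry a reply back to the caller, so this pairs naturally with the log-to-a-file approach from the question.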

MarkusQ
  • Can you send any signal via `kill`? If not, perhaps reword this answer, as `kill`, to the best of my knowledge, can only send a 'kill' signal, which isn't particularly useful here – puk Oct 04 '13 at 00:27
  • @puk you can actually send other signals with kill using the '-s' parameter, e.g. 'kill -s QUIT <pid>'. – Keith Hughitt Oct 15 '13 at 01:59
  • @puk kill is not an actual kill. It sends the signal you tell it (for instance kill -9, which is the default if I'm not mistaken) to the process. It's called kill for historical reasons, as far as I know – DGoiko Jan 08 '19 at 16:24
5

You could use Pyro (http://pythonhosted.org/Pyro4/), the Python Remote Objects library. It lets you remotely access Python objects. It's easy to implement, has low overhead, and isn't as invasive as Twisted.

Maori
directedition
0

You can do this using multiprocessing managers (https://docs.python.org/3/library/multiprocessing.html#managers):

Managers provide a way to create data which can be shared between different processes, including sharing over a network between processes running on different machines. A manager object controls a server process which manages shared objects. Other processes can access the shared objects by using proxies.

Example server:

from multiprocessing.managers import BaseManager

class RemoteOperations:
    def add(self, a, b):
        print('adding in server process!')
        return a + b

    def multiply(self, a, b):
        print('multiplying in server process!')
        return a * b

class RemoteManager(BaseManager):
    pass

RemoteManager.register('RemoteOperations', RemoteOperations)

manager = RemoteManager(address=('', 12345), authkey=b'secret')
manager.get_server().serve_forever()

Example client:

from multiprocessing.managers import BaseManager

class RemoteManager(BaseManager):
    pass

RemoteManager.register('RemoteOperations')
manager = RemoteManager(address=('localhost', 12345), authkey=b'secret')
manager.connect()

remoteops = manager.RemoteOperations()
print(remoteops.add(2, 3))
print(remoteops.multiply(2, 3))
codeape