
I want to serve a Falcon API that can handle multiple simultaneous user requests. Each request triggers a long processing task, so I used ThreadPoolExecutor from concurrent.futures as follows:

import falcon
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=10)

class Resource:

    def on_post(self, req, resp):
        
        def some_long_task():
            # here is the code for the long task
            ...

        executor.submit(some_long_task)
        
        resp.body = 'OK'
        resp.status = falcon.HTTP_201
        
app = falcon.App()

resource = Resource()

app.add_route('/', resource)

# Serve...

I am serving the API with gunicorn, using the following command: gunicorn main:app --timeout 10000.

When I send two requests to the API in quick succession, both long tasks start running in the background. However, as soon as the first long task finishes, the second one stops executing. How can I avoid this?

erup

1 Answer


Replacing your long-running task with time.sleep(10), I'm unable to reproduce your problem:

import logging
import time
import uuid

import falcon
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(
    format='%(asctime)s [%(levelname)s] %(message)s', level=logging.INFO)
executor = ThreadPoolExecutor(max_workers=10)


class Resource:

    def on_post(self, req, resp):

        def some_long_task():
            # here is the code for the long task

            time.sleep(10)
            logging.info(f'[task {taskid}] complete')

        taskid = str(uuid.uuid4())
        executor.submit(some_long_task)
        logging.info(f'[task {taskid}] submitted')

        resp.media = {'taskid': taskid}
        resp.status = falcon.HTTP_ACCEPTED


app = falcon.App()
resource = Resource()
app.add_route('/', resource)
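
To exercise the endpoint, a few POST requests can be issued a couple of seconds apart, for instance with a small stdlib client script like the one below (purely illustrative; it assumes gunicorn's default bind address of 127.0.0.1:8000):

import time
import urllib.request

for _ in range(3):
    # POST an empty body; the resource does not read the request payload.
    request = urllib.request.Request(
        'http://127.0.0.1:8000/', data=b'', method='POST')
    with urllib.request.urlopen(request) as response:
        print(response.status, response.read().decode())
    time.sleep(2)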

As expected, all tasks are correctly run to completion:

[2021-11-26 21:45:25 +0100] [8242] [INFO] Starting gunicorn 20.1.0
[2021-11-26 21:45:25 +0100] [8242] [INFO] Listening at: http://127.0.0.1:8000 (8242)
[2021-11-26 21:45:25 +0100] [8242] [INFO] Using worker: sync
[2021-11-26 21:45:25 +0100] [8244] [INFO] Booting worker with pid: 8244
2021-11-26 21:45:29,565 [INFO] [task 5b45b1f5-15ac-4628-94d8-3e1fd0710d21] submitted
2021-11-26 21:45:31,133 [INFO] [task 4553e018-cfc6-4809-baa4-f873579a9522] submitted
2021-11-26 21:45:33,724 [INFO] [task c734d89e-5f75-474c-ad78-59f178eef823] submitted
2021-11-26 21:45:39,575 [INFO] [task 5b45b1f5-15ac-4628-94d8-3e1fd0710d21] complete
2021-11-26 21:45:41,142 [INFO] [task 4553e018-cfc6-4809-baa4-f873579a9522] complete
2021-11-26 21:45:43,735 [INFO] [task c734d89e-5f75-474c-ad78-59f178eef823] complete

Could the problem instead lie in your long task's code?

There are some non-trivial pitfalls to watch out for:

  • Gunicorn uses a variation of the pre-forking server design. If you happen to perform advanced setup, such as launching threads or opening file handles, before the fork, things might break when Gunicorn forks a worker. See also: Gunicorn: multiple background worker threads. Ideally, you should initialize your executor only after forking (see the first sketch after this list).
  • Submitting work to an executor in this fashion, without checking the actual outcome, potentially masks exceptions in your tasks, which could look like stopped execution. Maybe finishing the first task somehow provokes an exception in the second? Try surrounding your task with a try... except and logging exceptions (see the second sketch after this list).
  • I've never run into this myself, but importing from parallel threads can apparently cause a deadlock; see, e.g., ThreadPoolExecutor + Requests == deadlock? This shouldn't normally be an issue, but it could be caused by parallel tasks attempting to import plugins at runtime, as in the case of requests.
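
As a minimal sketch of the first point, the executor can be created lazily inside the worker process instead of at import time; this matters in particular if you run Gunicorn with --preload, where the application module is imported in the master before the workers are forked. The get_executor helper below is just an illustrative name, not part of Falcon or Gunicorn:

import threading
from concurrent.futures import ThreadPoolExecutor

_executor = None
_executor_lock = threading.Lock()

def get_executor():
    # Create the pool on first use, i.e. inside the already-forked worker,
    # rather than at module import time.
    global _executor
    if _executor is None:
        with _executor_lock:
            if _executor is None:
                _executor = ThreadPoolExecutor(max_workers=10)
    return _executor

The resource would then call get_executor().submit(some_long_task) instead of relying on a module-level executor instance.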
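
And a sketch of the second point, surrounding the task body with a try... except so that any failure is at least logged (adapting the task from the example above):

def some_long_task():
    try:
        # here is the code for the long task
        time.sleep(10)
    except Exception:
        # Without this, the exception is only stored on the Future and never
        # surfaces unless someone calls future.result() or future.exception().
        logging.exception(f'[task {taskid}] failed')
        raise
    else:
        logging.info(f'[task {taskid}] complete')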
Vytas