
I have a website built on Quart-Trio and Hypercorn. There is a /search page implemented with Whoosh. Some search operations can take up to ~6 seconds, and in production that would probably hurt the site's availability for other users.

So I think I need to run the search as a parallel task on another processor core.

(Using Trio multitasking doesn't seem to be a good option, since the long execution time is caused by data processing, not by I/O operations, so managing tasks within a single thread wouldn't help.)

What is the easiest and most efficient way to do multiprocessing there?

I know there's the multiprocessing module in Python, and curio, and Hypercorn workers, and Hypercorn's DispatcherMiddleware, and all that information is a bit overwhelming. DispatcherMiddleware looks nice and easy, but would it help, or does everything still run in the same thread? Or maybe all I need is to start Hypercorn with several workers?

Please point me in the right direction: how can I solve this small task without becoming a multiprocessing guru?

chang zhao
  • Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. – Community Jun 14 '22 at 03:25
  • 1
    Trio has a run_sync function, https://trio.readthedocs.io/en/stable/reference-core.html?highlight=run_sync#trio.to_thread.run_sync is this appropriate? – pgjones Jun 14 '22 at 14:22
  • @pgjones Yes, I think so. Thank you. Right now I'm going to try multiple workers in production. It seems CentOS 7 doesn't like Python > 3.6, so I'm busy learning Ubuntu ATM :). – chang zhao Jun 14 '22 at 14:29
  • Multiple workers will help, but each worker will still block while the data-heavy code executes, which will slow any other requests that worker is handling. I'd therefore use multiple workers and run_sync. – pgjones Jun 15 '22 at 09:41

1 Answer


Running Hypercorn with multiple workers solves the problem:

hypercorn --config conf/hypercorn.conf -k trio main:app -w 2

Two searches now run in parallel.
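For reference, the same settings can also live in the config file rather than on the command line. A minimal sketch, assuming a TOML-style Hypercorn config (the bind address is made up; the original conf/hypercorn.conf contents are unknown):

```toml
# Assumed Hypercorn config in TOML form
bind = ["0.0.0.0:8000"]
worker_class = "trio"
workers = 2
```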

chang zhao