In the simple case you would just tune the server.thread_pool
configuration parameter, as was mentioned in the comments to the question.
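For instance, a minimal sketch of the same knob set in code rather than in an ini file (the pool size of 32 just mirrors the configs further down; it is not a recommendation):

import cherrypy

# Enlarge the worker thread pool of the built-in CherryPy HTTP server;
# equivalent to server.thread_pool = 32 in the ini files below.
cherrypy.config.update({'server.thread_pool': 32})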
In a real case it depends on many factors. What I can say for sure is that CherryPy is a threaded server and, because of the Python GIL, only one thread executes Python code at a time. That may not be a big issue for an IO-bound workload, but you can still take advantage of your CPU cores by running several CherryPy processes of the same application. This may dictate some design decisions, like avoiding in-process caching and in general following a shared-nothing architecture, so that your processes are interchangeable.
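To illustrate the shared-nothing point, here is a rough sketch of keeping mutable state out of the process, assuming a local Redis server and the redis package, neither of which is part of the example below:

import cherrypy
import redis

class Counter:

    def __init__(self):
        # All state lives in Redis, so any worker process can serve any request.
        self.store = redis.Redis(host='127.0.0.1', port=6379)

    @cherrypy.expose
    def index(self):
        # INCR is atomic on the Redis side; no in-process caching involved.
        return str(self.store.incr('hits'))

cherrypy.tree.mount(Counter(), '/')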
Having many application instances makes maintenance more complicated, so you should weigh the pros and cons. OK, here follows an example that can give you some clues.
mp.py -- CherryPy app
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import cherrypy
class App:

    @cherrypy.expose
    def index(self):
        '''Make some traffic'''
        return ('Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean quis laoreet urna. '
                'Integer vitae volutpat neque, et tempor quam. Sed eu massa non libero pretium tempus. '
                'Quisque volutpat aliquam lacinia. Class aptent taciti sociosqu ad litora torquent per '
                'conubia nostra, per inceptos himenaeos. Quisque scelerisque pellentesque purus id '
                'vulputate. Suspendisse potenti. Vestibulum rutrum vehicula magna et varius. Sed in leo'
                ' sit amet massa fringilla aliquet in vitae enim. Donec justo dolor, vestibulum vitae '
                'rhoncus vel, dictum eu neque. Fusce ac ultrices nibh. Mauris accumsan augue vitae justo '
                'tempor, non ullamcorper tortor semper. ')

cherrypy.tree.mount(App(), '/')
srv8080.ini -- first instance config
[global]
server.socket_host = '127.0.0.1'
server.socket_port = 8080
server.thread_pool = 32
srv8081.ini -- second instance config
[global]
server.socket_host = '127.0.0.1'
server.socket_port = 8081
server.thread_pool = 32
proxy.conf -- nginx config
upstream app {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Put mp.py and the *.ini files in a directory. Add proxy.conf to nginx's sites-enabled and reload nginx. Open the directory containing mp.py in two terminals. Then run cherryd -e production -i mp -c ./srv8080.ini in the first and cherryd -e production -i mp -c ./srv8081.ini in the second.
Now you can play with it. I ran the following on my development machine (Linux Mint 15, Core i5 x2 + HT).
ab -c 1 -n 12800 -k http://127.0.0.1:8080/ # ~1600 rps
ab -c 16 -n 12800 http://127.0.0.1:8080/ # ~400 rps
ab -c 32 -n 12800 http://127.0.0.1/ # ~1500 rps