
Out of curiosity I put together a simple CherryPy server with the following code that sleeps for 5 seconds (as a mock processing delay) and then returns a simple 'hello'.

import cherrypy
import time

class server_runner(object):
  @cherrypy.expose
  def api(self, url):
    time.sleep(5)
    return "hello"

if __name__ == '__main__':
    cherrypy.server.socket_host = '0.0.0.0'
    cherrypy.quickstart(server_runner())

I ran a simple load test (results here https://i.stack.imgur.com/Aqw1F.png), and response times (blue) stayed consistent until the 27th active user (the green line shows the active user count), at which point they escalated quickly. I'm a little confused as to how CherryPy can be labeled a "production-ready" server if 27 users can't be handled without major latency. Is there something wrong with my implementation or my understanding? This is running on a c3.large EC2 instance.

  • what are the [`server.thread_pool`](http://docs.cherrypy.org/en/latest/pkg/cherrypy.html?highlight=thread_pool#cherrypy._cpserver.Server.thread_pool) configurations? – behzad.nouri Jul 10 '14 at 23:47
  • thanks for the quick reply behzad - at the time of writing the question it was the default of 10. I read up a bit and changed it to 100, which appears to have helped: http://i.imgur.com/H8igGhu.png. Do you know what kind of limitations/diminishing returns are in place with thread_pool configurations? – RonniePythonist Jul 11 '14 at 02:06
  • I think the last comment under [this](http://stackoverflow.com/a/2685479/625914) answer explains it very well. – behzad.nouri Jul 11 '14 at 10:12

1 Answer


In the simple case you would just tune the server.thread_pool configuration parameter, as mentioned in the comments on the question.
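For example, here is a minimal sketch of how that setting could be applied to the server from the question; 100 is simply the value tried in the comments, not a recommendation, and the default is 10.

import cherrypy
import time

class server_runner(object):
  @cherrypy.expose
  def api(self, url):
    time.sleep(5)  # mock processing delay
    return "hello"

if __name__ == '__main__':
    cherrypy.server.socket_host = '0.0.0.0'
    # each request occupies a worker thread for its full duration,
    # so a 5 second handler with a pool of 10 caps throughput at ~2 req/s
    cherrypy.server.thread_pool = 100
    cherrypy.quickstart(server_runner())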

In a real case it depends on many factors. But what I can say for sure is that CherryPy is a threaded server, and because of the Python GIL only one thread runs Python code at a time. That may not be a big issue for an IO-bound workload, but you can still take advantage of your CPU cores by running several CherryPy processes of the same application. This may dictate some design decisions, like avoiding in-process caching and in general following a shared-nothing architecture, so that your processes can be used interchangeably.
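To illustrate the shared-nothing point, here is a hedged sketch (the handler and the cache are made up for illustration): a module-level cache like this lives separately in each CherryPy process, so two instances behind a load balancer will each keep their own copy and can return inconsistent results.

import cherrypy

# per-process state: NOT shared between CherryPy instances,
# so each process behind the proxy accumulates its own copy
_cache = {}

class App:

  @cherrypy.expose
  def counter(self):
    # hypothetical handler: each process counts only its own hits
    _cache['hits'] = _cache.get('hits', 0) + 1
    return str(_cache['hits'])

cherrypy.tree.mount(App(), '/')

Moving such state out of the processes (a database, memcached, etc.) keeps the instances interchangeable.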

Having many application instances makes maintenance more complicated, so you should weigh the pros and cons. Anyway, here follows an example that may give you some clues.

mp.py -- CherryPy app

#!/usr/bin/env python
# -*- coding: utf-8 -*-


import cherrypy


class App:

  @cherrypy.expose
  def index(self):
    '''Make some traffic'''  
    return ('Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean quis laoreet urna. '
      'Integer vitae volutpat neque, et tempor quam. Sed eu massa non libero pretium tempus. '
      'Quisque volutpat aliquam lacinia. Class aptent taciti sociosqu ad litora torquent per '
      'conubia nostra, per inceptos himenaeos. Quisque scelerisque pellentesque purus id '
      'vulputate. Suspendisse potenti. Vestibulum rutrum vehicula magna et varius. Sed in leo'
      ' sit amet massa fringilla aliquet in vitae enim. Donec justo dolor, vestibulum vitae '
      'rhoncus vel, dictum eu neque. Fusce ac ultrices nibh. Mauris accumsan augue vitae justo '
      'tempor, non ullamcorper tortor semper. ')


cherrypy.tree.mount(App(), '/')

srv8080.ini -- first instance config

[global]
server.socket_host = '127.0.0.1'
server.socket_port = 8080
server.thread_pool = 32

srv8081.ini -- second instance config

[global]
server.socket_host = '127.0.0.1'
server.socket_port = 8081
server.thread_pool = 32

proxy.conf -- nginx config

upstream app {
  server 127.0.0.1:8080;
  server 127.0.0.1:8081;
}

server {

    listen  80;

    server_name  localhost;

    location / {
      proxy_pass        http://app;
      proxy_set_header  Host             $host;
      proxy_set_header  X-Real-IP        $remote_addr;
      proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
    }

}

Put mp.py and the *.ini files in one directory. Add proxy.conf to nginx's sites-enabled and reload nginx. Open the directory containing mp.py in two terminals. Then run cherryd -e production -i mp -c ./srv8080.ini in the first and cherryd -e production -i mp -c ./srv8081.ini in the second.

Now you can play with it. I ran the following on my development machine (Linux Mint 15, Core i5 x2 + HT).

ab -c 1 -n 12800 -k http://127.0.0.1:8080/ # ~1600 rps
ab -c 16 -n 12800 http://127.0.0.1:8080/   # ~400  rps
ab -c 32 -n 12800 http://127.0.0.1/        # ~1500 rps  
saaj