3

On Heroku, using the Play Framework, is it necessary to set up a background job processor (using Akka, RabbitMQ, etc.) in order to perform heavy tasks*?

I know that Play offers the ability to perform tasks asynchronously within requests, but would this be enough to avoid requiring a background job processor? On a standalone (non-Heroku) Play app, the asynchrony features make it possible to do everything in one process. On Heroku, however, it seems this would not be enough: according to the book *Professional Heroku Programming* (page 254, in the Developing with Ruby section), the web dyno is blocked from the time a request is received until a response is delivered, and all other requests are queued during that time.

If a background job processor is necessary, are there any examples? I've seen examples of Play and Akka, and Play and Heroku, but not all three together.

*(By heavy tasks, I generally mean potentially long-running tasks that must return an answer to the end user, such as the result of a complex database query or web-service call, as opposed to fire-and-forget work like sending emails.)

kes
  • On the current Cedar (*.herokuapp.com) stack where Play apps run, [simultaneous connections are supported](https://devcenter.heroku.com/articles/http-routing#simultaneous-connections). The web dyno blocking you mentioned is only on the older Bamboo stack. – ryanbrainard Feb 10 '13 at 06:54

1 Answer

3

You don't need an explicit worker when using Play. The common pattern with Play 2 is to use an Async response in a controller and Akka for longer-running processes. All the examples on the Play Framework website should work out of the box on Heroku.
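The essence of the pattern above is: the controller returns immediately with a promise of a result, the heavy work runs on another thread (in Play, an Akka dispatcher), and the response is completed when the work finishes. A minimal plain-JVM sketch of that idea using `CompletableFuture` as a stand-in for Play's promise type (this is not Play's actual API, just the shape of the technique):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Simulates a long-running task (e.g. a complex database query).
    // In Play this work would run on an Akka dispatcher, not the request thread.
    static CompletableFuture<String> longQuery() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100); // pretend this is slow I/O
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "result";
        });
    }

    public static void main(String[] args) {
        // The "controller" returns immediately; the response body is
        // assembled only once the future completes.
        CompletableFuture<String> response = longQuery().thenApply(r -> "200 OK: " + r);
        System.out.println(response.join()); // prints "200 OK: result"
    }
}
```

Because the request thread is released while the future runs, the dyno can keep accepting other connections instead of blocking on one slow request.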

Naaman Newbold
  • Would using an Async response be suitable for tasks that take longer than 30 seconds to complete (i.e. the time for the web dyno to time out)? – kes Feb 28 '13 at 02:41
  • 2
    Sorry, just now saw your response. If the request takes longer than 30 seconds, you can do a couple of things: (1) send a keep-alive every 20 seconds or so, which keeps the connection open at the router layer; or (2) use a queue and have the client poll. We use Redis to accomplish this: we immediately generate a UUID, stuff it in Redis, and return a 202 with a Location header pointing to where the client can poll; when the work is complete, the Future updates Redis and the result is sent back to the client on the next poll. – Naaman Newbold Mar 19 '13 at 17:15