I'm contemplating writing a web application with Rails. Each request made by the user will depend on an external API being called. This external API can randomly be very slow (2-3 seconds), and so obviously this would impact an individual request.

During this time when the code is waiting for the external API to return, will further user requests be blocked?

Just for further clarification as there seems to be some confusion, this is the model I'm anticipating:

  1. Alice makes request to my web app. To fulfill this, a call to API server A is made. API server A is slow and takes 3 seconds to complete.
  2. During this wait time when the Rails app is calling API server A, Bob makes a request which has to make a request to API server B.

Is the Ruby (1.9.3) interpreter (or something in the Rails 3.x framework) going to block Bob's request, requiring him to wait until Alice's request is done?

Matty
  • I have similar question actually..but i'm just wondering, do you make DB queries from your rails app through activerecord, or the data is purely driven from backend API server which make the DB query and return the result to your rails front-end? – Benny Tjia Feb 07 '12 at 02:20
  • @BennyTjia Both. An individual request will use both locally stored data and unique data returned by the API server. – Matty Feb 07 '12 at 09:29

3 Answers

8

If you only use a single-threaded, non-evented server (or an evented server without evented I/O), then yes. Among other solutions, using Thin with EM-Synchrony will avoid this.

Elaborating, based on your update:

No, neither Ruby nor Rails is going to cause your app to block. You left out the part that will, though: the web server. You either need multiple processes, multiple threads, or an evented server coupled with doing your web service requests with an evented I/O library.

@alexd described using multiple processes. I, personally, favor an evented server because I don't need to know/guess ahead of time how many concurrent requests I might have (or use something that spins up processes based on load). A single nginx process fronting a single thin process can serve tons of parallel requests.
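To make the multi-threaded option above concrete, here is a minimal plain-Ruby sketch (no Rails, no real HTTP) where `sleep` stands in for the slow external API. Run serially, Bob's "request" waits for Alice's; run in two threads, the waits overlap, because Ruby releases the interpreter lock during blocking I/O and `sleep`:

```ruby
require "benchmark"

# Stand-in for a slow external API call (the real one takes 2-3 seconds).
def slow_api_call
  sleep 0.5
  :done
end

# One thread: Bob's call can't start until Alice's finishes.
serial = Benchmark.realtime do
  slow_api_call # Alice
  slow_api_call # Bob
end

# Two threads: both calls wait at the same time.
threaded = Benchmark.realtime do
  [Thread.new { slow_api_call },  # Alice
   Thread.new { slow_api_call }]  # Bob
    .each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

The serial version takes roughly the sum of the two waits (~1s here), while the threaded version takes roughly the longest single wait (~0.5s) -- the same effect you get from multiple server processes or an evented server.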

smparkes
    _Network_ latency? If you mean the current user will be forced to wait, of course they will (but I assume from the way he wrote the question, that's unavoidable). I read the question as not about avoiding latency for the user hitting the external service, but about blocking other concurrent requests. – smparkes Jan 20 '12 at 21:19
  • @smparkes I've added further clarification to the question regarding this – Matty Jan 20 '12 at 21:32
3

The answer to your question depends on the server your Rails application is running on. What are you using right now? Thin? Unicorn? Apache+Passenger?

I wholeheartedly recommend Unicorn for your situation -- it makes it very easy to run multiple server processes in parallel, and you can configure the number of parallel processes simply by changing a number in a configuration file. While one Unicorn worker is handling Alice's high-latency request, another Unicorn worker can be using your free CPU cycles to handle Bob's request.
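That "number in a configuration file" lives in Unicorn's config file. A minimal sketch (the worker count and paths here are assumptions -- tune them for your app and hardware):

```ruby
# config/unicorn.rb
worker_processes 4    # 4 requests can be in flight at once;
                      # while one worker waits on the slow API,
                      # the other 3 keep serving requests
timeout 30            # kill a worker stuck longer than this
preload_app true      # load Rails once, then fork the workers
```

Start it with something like `unicorn -c config/unicorn.rb`, and raise `worker_processes` if you expect more concurrent slow requests than workers.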

Alex D
0

Most likely, yes. There are ways around this, obviously, but none of them are easy.

The better question is, why do you need to hit the external API on every request? Why not implement a cache layer between your Rails app and the external API and use that for the majority of requests?

This way, with some custom logic for expiring the cache, you'll have a snappy Rails app and still be able to leverage the external API service.
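In a Rails app this is typically `Rails.cache.fetch` with an `:expires_in` option; to show the mechanism without a Rails dependency, here is a tiny plain-Ruby TTL cache (the class and its names are illustrative, not a real library):

```ruby
# Minimal time-to-live cache: serve a stored value until it expires,
# and only run the (slow) block on a miss.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @store = {}
  end

  # Return the cached value for key, or compute it with the block
  # (the slow API call) and cache it for @ttl seconds.
  def fetch(key)
    entry = @store[key]
    return entry.value if entry && Time.now < entry.expires_at

    value = yield
    @store[key] = Entry.new(value, Time.now + @ttl)
    value
  end
end

calls = 0
cache = TtlCache.new(60)
2.times { cache.fetch(:users) { calls += 1; :api_result } }
# The slow block ran only once; the second fetch was a cache hit.
```

With `Rails.cache.fetch("external_api/users", expires_in: 10.minutes) { ... }` the shape is the same: most requests get the cached result instantly, and only an occasional request pays the 2-3 second API cost.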

Srdjan Pejic