
I'm using a rate limiter to throttle the number of requests that are routed

The requests are sent to a channel, and I want to limit the number that are processed per second. I'm struggling to tell whether I've set this up correctly: I don't get an error, but I'm not sure the rate limiter is actually being used.

This is what is being added to the channel:

type processItem struct {
    itemString string
}

Here's the channel and limiter:

itemChannel := make(chan processItem, 5)
itemThrottler := rate.NewLimiter(4, 1) // 4 events per second, burst of 1
var waitGroup sync.WaitGroup

Items are added to the channel:

case "newItem":
    waitGroup.Add(1)
    itemToExec := new(processItem)
    itemToExec.itemString = "item string"
    itemChannel <- *itemToExec

Then a go routine is used to process everything that is added to the channel:

go func() {
    defer waitGroup.Done()
    err := itemThrottler.Wait(context.Background())
    if err != nil {
        fmt.Printf("Error with limiter: %s", err)
        return
    }
    for item := range itemChannel {
        execItem(item.itemString) // the processing function
    }
    defer func() { <-itemChannel }()
}()
waitGroup.Wait()

Can someone confirm that the following occurs:

  • The execItem function is run on each member of the channel 4 times a second

I don't understand what "err := itemThrottler.Wait(context.Background())" is doing in the code, how is this being invoked?

1 Answer


... i'm unsure if i'm even using the rate limiter

Yes, you are using the rate-limiter. You are rate-limiting the case "newItem": branch of your code.

I don't understand what "err := itemThrottler.Wait(context.Background())" is doing in the code

itemThrottler.Wait(..) will just stagger requests (4/s, i.e. one every 0.25s) - it does not refuse requests when the rate is exceeded. So what does this mean? If you receive a glut of 1000 requests in 1 second:

  • 4 requests will be handled immediately; but
  • 996 requests will create a backlog of 996 go-routines that will block

The 996 will unblock at a rate of 4/s, so the backlog of pending go-routines will not clear for roughly another 4 minutes (or longer if more requests come in). A backlog of go-routines may or may not be what you want. If not, you may want to use Limiter.Allow - if it returns false, refuse the request (i.e. don't create a go-routine) and return a 429 error (if this is an HTTP request).

Finally, if this is an HTTP request, you should use its embedded context when calling Wait, e.g.

func (a *app) myHandler(w http.ResponseWriter, r *http.Request) {
    // ...

    err := a.ratelimiter(r.Context())

    if err != nil {
        // client http request most likely canceled (i.e. caller disconnected)
    }
}
colm.anseo
  • To add some context, I want to not spam our mongod (mongo server) instance with items. Items are basically queries to execute. Our frontend has a dashboard with say 10 charts on it, that's 10 queries sent. Setting the rate limiter to 5,1 would mean execute 5 now, then 5 per second? So it should take ~2 seconds? – SuperSecretAndNotSafeFromWork Jun 29 '20 at 12:04
  • The second parameter of the limiter is "if the limiter processes the items in the channel faster than the rate, how many can it do?", is that right? – SuperSecretAndNotSafeFromWork Jun 29 '20 at 12:07
  • 1
    1Q. Yes. 2Q. The second parameter of `NewLimiter` you mean? The 2nd parameter is the burst limit. To use the example above, this could be 1000 - i.e. we are allowing 1000 goroutines to run concurrently, but will limit anything beyond that (i.e. handle this one off burst of requests, but typically requests should be much less frequent). – colm.anseo Jun 29 '20 at 12:21
  • @colm.anseo curious.. What's the purpose of passing the embedded context when calling Wait? – thiago Jul 07 '22 at 02:25
  • 1
    @thiago if the client making the REST call disconnects or the client cancels the request, the server side can detect this early using `r.Context()`. This allows the handler to free up resources (including rate limit settings in this case) and exit early - since it's pointless returning any response to a client that is no longer listening. – colm.anseo Jul 07 '22 at 02:55
  • @colm.anseo does it also imply that, for each HTTP request, a new rate limit instance is created with this settings: rate.NewLimiter(4, 1) ? – thiago Jul 07 '22 at 04:06