
I've got the following policy setup:

// The Slack API limit for postMessage is generally 1 request per channel per second.
// Some workspace-specific limits may apply.
// We just limit everything to 1 request per second.
var slackApiRateLimitPerChannelPerSecond = 1;

var rateLimit = Policy.RateLimitAsync(slackApiRateLimitPerChannelPerSecond, TimeSpan.FromSeconds(slackApiRateLimitPerChannelPerSecond),
    (retryAfter, _) => retryAfter.Add(TimeSpan.FromSeconds(slackApiRateLimitPerChannelPerSecond)));

This should:

  • Rate limit requests to 1 req/s
  • Retry when rate limited

I can't wrap my head around wrapping this into a second policy that would retry...

I could retry this like so:

try
{
   _policy.Execute(...)
}
catch(RateLimitedException ex)
{
   // Policy.Retry with ex.RetryAfter
}

But that does not seem right.

I'd like to retry this a couple of (3?) times so the method is a bit more resilient - how would I do that?

sommmen

2 Answers


I might be late to the party but let me put in my 2 cents.

Rate limiter

This policy's engine implements the token bucket algorithm in a lock-free fashion. That has an implication: it does not work the way you might intuitively expect.

For instance, from this policy's perspective 1 request / second is the same as 60 requests / minute. In reality the latter should not impose an even distribution, but here it does! So, you can't use it like this (the sketch after the list illustrates why):

  • issue 50 requests in the first 10 seconds
  • send nothing for the next 45 seconds
  • issue 9 more requests in the last 5 seconds without hitting the limit
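
A minimal sketch of that behaviour (assuming Polly 7.2+, where the rate limit policy exists, and the default burst size of 1): configuring 60 executions per minute replenishes one token per second, so back-to-back calls are rejected even though we are far below 60 in the current minute.

using System;
using Polly;
using Polly.RateLimit;

var limiter = Policy.RateLimit(60, TimeSpan.FromMinutes(1));

for (var i = 1; i <= 3; i++)
{
    try
    {
        // Only the first call succeeds; the bucket refills one token per second.
        limiter.Execute(() => Console.WriteLine($"request {i} allowed"));
    }
    catch (RateLimitRejectedException ex)
    {
        Console.WriteLine($"request {i} rejected, retry after {ex.RetryAfter}");
    }
}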

Rate limiter as shared policy

In the case of Polly most of the policies are stateless, which means two executions do not need to share anything.

But in the case of Circuit Breaker there is state inside a controller, so you should use the same policy instance across multiple executions.

In the case of the Bulkhead and Rate Limiter policies the state is not so obvious; it is hidden inside the implementation. But the same rule applies here: you should share the same policy instance between multiple threads to achieve the desired outcome.
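
For example (a minimal sketch; the SlackPolicies holder type is just an illustration), the rate limiter could be exposed as a single shared instance rather than being created inside every method call:

using System;
using Polly;
using Polly.RateLimit;

public static class SlackPolicies
{
    // Every caller draws from the same token bucket. Newing up a policy per
    // call would give each call its own bucket, so the limit would never
    // actually be enforced.
    public static readonly RateLimitPolicy PerChannel =
        Policy.RateLimit(1, TimeSpan.FromSeconds(1));
}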

Rate limiter vs Rate gate

A rate limiter can be used on both the client side and the server side. On the server side it can proactively refuse excess requests to mitigate flooding, whereas on the client side it can proactively self-restrict outgoing requests to honour the contract between server and client.

This policy is more suitable for the server side (see the RetryAfter property). On the client side a rate gate implementation might be more appropriate, which delays outgoing requests by utilizing queues and timers.
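
A bare-bones illustration of the difference (this RateGate class is not part of Polly, just a sketch): instead of throwing when the interval has not elapsed yet, it delays the caller until the next slot opens.

using System;
using System.Threading;
using System.Threading.Tasks;

public sealed class RateGate
{
    private readonly TimeSpan _interval;
    private readonly SemaphoreSlim _mutex = new(1, 1);
    private DateTime _nextSlotUtc = DateTime.MinValue;

    public RateGate(TimeSpan interval) => _interval = interval;

    public async Task WaitAsync(CancellationToken ct = default)
    {
        // Callers queue up on the semaphore, so requests go out at most
        // once per interval instead of being rejected.
        await _mutex.WaitAsync(ct);
        try
        {
            var wait = _nextSlotUtc - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait, ct);

            _nextSlotUtc = DateTime.UtcNow.Add(_interval);
        }
        finally
        {
            _mutex.Release();
        }
    }
}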

Rate limiter with retry

If retry and rate limiter both live on the client side:

var retryPolicy = Policy
    .Handle<RateLimitRejectedException>()
    .WaitAndRetry(
        3,
        (int _, Exception ex, Context __) => ((RateLimitRejectedException)ex).RetryAfter,
        (_, __, ___, ____) => { });
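
Putting it together (a sketch continuing the snippet above; PostMessageToSlack, channel and text are placeholders, and the rate limiter stands in for your shared instance): the retry wraps the rate limiter, so a rejected execution is re-attempted after the RetryAfter delay.

var rateLimiterPolicy = Policy.RateLimit(1, TimeSpan.FromSeconds(1));
var resilient = retryPolicy.Wrap(rateLimiterPolicy);

// Retries up to 3 times if the rate limiter rejects the call.
resilient.Execute(() => PostMessageToSlack(channel, text));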

If retry resides on the client side whereas the rate limiter is on the server side:

var retryPolicy = Policy<HttpResponseMessage>
    .HandleResult(res => res.StatusCode == HttpStatusCode.TooManyRequests)
    .WaitAndRetry(
        3,
        (int _, DelegateResult<HttpResponseMessage> res, Context __)
            => res.Result.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(0),
        (_, __, ___, ____) => { }); // this WaitAndRetry overload requires an onRetry delegate
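
Usage could then look like this (a sketch continuing the snippet above; it assumes .NET 5+ for the synchronous HttpClient.Send, and builds a fresh HttpRequestMessage per attempt because a request message cannot be sent twice):

var client = new HttpClient();

var response = retryPolicy.Execute(() =>
    client.Send(new HttpRequestMessage(
        HttpMethod.Post, "https://slack.com/api/chat.postMessage")));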
Peter Csala

You can omit the factory and wrap the rate-limiting policy into another one:

var ts = TimeSpan.FromSeconds(1);
var rateLimit = Policy.RateLimit(1, ts);
var policyWrap = Policy.Handle<RateLimitRejectedException>()
    .WaitAndRetry(3, _ => ts) // note that you might want to use more advanced back off policy here 
    .Wrap(rateLimit);
policyWrap.Execute(...);
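
For example, a more advanced back-off variant of the same wrap could double the delay per attempt (a sketch; whether exponential back-off suits your API is an assumption):

var backoffWrap = Policy.Handle<RateLimitRejectedException>()
    .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)))
    .Wrap(rateLimit);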

If you want to respect the returned RetryAfter then the try-catch approach is the way to go, based on the documentation example:

public async Task SearchAsync(string query, HttpContext httpContext)
{
    var rateLimit = Policy.RateLimitAsync(20, TimeSpan.FromSeconds(1), 10);

    try
    {
        var result = await rateLimit.ExecuteAsync(() => TextSearchAsync(query));

        var json = JsonConvert.SerializeObject(result);

        httpContext.Response.ContentType = "application/json";
        await httpContext.Response.WriteAsync(json);
    }
    catch (RateLimitRejectedException ex)
    {
        string retryAfter = DateTimeOffset.UtcNow
            .Add(ex.RetryAfter)
            .ToUnixTimeSeconds()
            .ToString(CultureInfo.InvariantCulture);

        httpContext.Response.StatusCode = 429;
        httpContext.Response.Headers["Retry-After"] = retryAfter;
    }
}

UPD

There is a WaitAndRetry overload with a sleepDurationProvider which also passes the exception, so it can be used with the Wrap approach:

var policyWrap = Policy.Handle<RateLimitRejectedException>()
    .WaitAndRetry(5, 
        sleepDurationProvider: (_, ex, _) => (ex as RateLimitRejectedException)?.RetryAfter.Add(TimeSpan.From....) ?? TimeSpan.From...,
        onRetry:(ex, _, i, _) => { Console.WriteLine($"retry: {i}"); }) 
    .Wrap(rateLimit);
Guru Stron
  • I'm not too familiar with polly - does .wrap() mean, the rate limit will be first, then the retry? so that it will respect the rate limit? – sommmen Mar 15 '23 at 14:13
  • @sommmen yes, it will use the rate limiter and retry if it fails. – Guru Stron Mar 15 '23 at 14:14
  • @sommmen also note that `retryAfterFactory` accepted by the overload is `Func`, i.e. it should be the same as the result of action. – Guru Stron Mar 15 '23 at 14:19