
I'm currently researching the resilience4j library and for some reason the following code doesn't work as expected:

@Test
public void testRateLimiterProjectReactor()
{
    // The configuration below will allow 2 requests per second and a "timeout" of 2 seconds.
    RateLimiterConfig config = RateLimiterConfig.custom()
                                                .limitForPeriod(2)
                                                .limitRefreshPeriod(Duration.ofSeconds(1))
                                                .timeoutDuration(Duration.ofSeconds(2))
                                                .build();

    // Step 2.
    // Create a RateLimiter and use it.
    RateLimiterRegistry registry = RateLimiterRegistry.of(config);
    RateLimiter rateLimiter = registry.rateLimiter("myReactorServiceNameLimiter");

    // Step 3.
    Flux<Integer> flux = Flux.from(Flux.range(0, 10))
                             .transformDeferred(RateLimiterOperator.of(rateLimiter))
                             .log();

    StepVerifier.create(flux)
                .expectNextCount(10)
                .expectComplete()
                .verify();
}

According to the official examples here and here, this should limit the request() to 2 elements per second. However, the logs show that all of the elements are fetched immediately:

15:08:24.587 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
15:08:24.619 [main] INFO reactor.Flux.Defer.1 - onSubscribe(RateLimiterSubscriber)
15:08:24.624 [main] INFO reactor.Flux.Defer.1 - request(unbounded)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(0)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(1)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(2)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(3)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(4)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(5)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(6)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(7)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(8)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(9)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onComplete()

I don't see what's wrong here.

    resilience4j rate limiter limits the number of subscriptions rather than elements – Martin Tarjányi Apr 26 '21 at 20:28
    Flux also has built-in rate limiter as an alternative: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#limitRate-int- – Martin Tarjányi Apr 26 '21 at 20:31
  • Thanks for confirming it! Yeah I've been looking at the code for the past few hours and noticed that in the javadocs as well. My issue with `limitRate` is that it's fixed - once you assemble the `Flux` you cannot change the rates, which is why I was trying Resilience4j. This is a continuation of https://stackoverflow.com/questions/67133878/how-to-improve-insert-performance-in-spring-data-reactive-cassandra and I'm currently researching ways to gradually reduce the limitRate in some way once too many errors/retries start piling up. Any ideas? – tftd Apr 26 '21 at 20:55
  • P.S. feel free to answer the original question (it indeed limits subscriptions). – tftd Apr 26 '21 at 20:56

2 Answers


As already answered in the comments above, the resilience4j RateLimiter tracks the number of subscriptions, not elements. To achieve rate limiting on elements you can use limitRate (and buffer + delayElements). For example:

        Flux.range(1, 100)
                .delayElements(Duration.ofMillis(100)) // to imitate a publisher that produces elements at a certain rate
                .log()
                .limitRate(10) // used to request up to 10 elements at a time from the publisher
                .buffer(10) // groups integers by 10 elements
                .delayElements(Duration.ofSeconds(2)) // emits a group of ints every 2 sec
                .subscribe(System.out::println);
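
If you still want the rate to come from the resilience4j RateLimiter itself (for example because its limit can be changed at runtime via changeLimitForPeriod), one possible workaround, sketched here with the question's configuration and an illustrative limiter name, is to decorate each element as its own Mono so that every element acquires its own permit:

        RateLimiterConfig config = RateLimiterConfig.custom()
                .limitForPeriod(2)
                .limitRefreshPeriod(Duration.ofSeconds(1))
                .timeoutDuration(Duration.ofSeconds(2))
                .build();
        RateLimiter rateLimiter = RateLimiterRegistry.of(config).rateLimiter("perElementLimiter");

        Flux.range(0, 10)
                // each element becomes its own Mono, so each subscription consumes one permit
                .concatMap(i -> Mono.just(i)
                        .transformDeferred(RateLimiterOperator.of(rateLimiter)))
                .log()
                .subscribe();

        // the limit can later be adjusted at runtime, e.g. when errors start piling up
        rateLimiter.changeLimitForPeriod(1);

The idea is that concatMap subscribes to one inner Mono at a time, so each element should wait at most one refresh period for its permit and stay under the 2-second timeoutDuration; with flatMap all elements would compete for permits up front, and those that cannot be served within the timeout would be rejected with RequestNotPermitted.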


If you want to get fancy, you can use Bucket4J in a manner similar to this:

    // Bucket that holds at most 1 token and refills 1 token per second
    Bucket bucket = Bucket.builder()
        .addLimit(Bandwidth.simple(1, Duration.ofSeconds(1L)))
        .withNanosecondPrecision()
        .build();

    // Executor Bucket4J uses to schedule delayed token consumption
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);

    Flux.generate(() -> 0, (i, s) -> {
        s.next(i);
        return i + 1;
    })
        .take(10)
        .concatMap(i -> Mono.fromFuture(() -> {
            // fast path: a token is already available, emit without scheduling
            if (bucket.tryConsume(1)) {
                return CompletableFuture.completedFuture(i);
            }
            // otherwise wait asynchronously until a token becomes available
            return bucket.asScheduler().consume(1, executor).thenApply(v -> i);
        }), 1) // prefetch of 1: don't request elements that cannot be emitted yet
        .doOnNext(i -> log.info("Next value = {}", i))
        .blockLast();

The concatMap delays emitting elements downstream until the rate limit is satisfied.

Checking whether a token is immediately available (tryConsume) is an optimization that avoids unnecessary thread hopping.

Specifying a prefetch of 1 in concatMap prevents it from requesting a lot of elements that it cannot yet emit.
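
With Bandwidth.simple(1, Duration.ofSeconds(1L)) this should log roughly one "Next value" line per second after the first element, since each subsequent element has to wait for a fresh token.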