
I'm trying to use a Flux to stream events to subscribers using RSocket. There can be a huge backlog of events (in the database) and they must be sent out in order, without gaps, and without flooding either the publisher (out of memory) or the consumer. None of the OverflowStrategy options seems suitable:

  • IGNORE: I'd like to block (or get a callback when there's more demand), not get an error
  • ERROR: I'd like to block (or get a callback when there's more demand), not get an error
  • DROP: bad, because events cannot be skipped (no gaps)
  • LATEST: bad, because events cannot be skipped (no gaps)
  • BUFFER: leads to out of memory on publisher
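For reference, this is roughly how I'm passing a strategy to Flux.create (a sketch, trimmed from my real code; BUFFER is the default and is exactly what runs me out of memory):

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

public class StrategyDemo {
    public static void main(String[] args) {
        // BUFFER queues without bound when the subscriber is slower than
        // the producer -- with a huge backlog that's an OOM waiting to happen.
        Flux<String> flux = Flux.create(sink -> {
            for (int i = 0; i < 5; i++) {
                sink.next("event-" + i);
            }
            sink.complete();
        }, FluxSink.OverflowStrategy.BUFFER);

        flux.subscribe(System.out::println);
    }
}
```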

I have everything working, but if I don't limit the rate in the subscribers, the publisher side goes out of memory -- that's bad, as one misbehaving subscriber could kill my service. Apparently I'm misunderstanding how backpressure works. Everywhere I look there is talk of limitRate. It works, but only on the subscriber side; using limitRate on the publisher side has no effect at all.

I've used Flux.generate and Flux.create to produce the events on the publisher side, but neither seems to respond to backpressure at all. I must be missing something, as the whole backpressure mechanism in Reactor is described as very transparent and easy to use...
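From the docs I'd have expected something like this Flux.create variant with onRequest, where the callback fires only when the subscriber signals demand (a sketch of what I mean, not my actual publisher):

```java
import java.util.concurrent.atomic.AtomicLong;
import reactor.core.publisher.Flux;

public class OnRequestDemo {
    public static void main(String[] args) {
        AtomicLong offset = new AtomicLong();

        // onRequest hands us the requested amount, so we emit exactly that
        // many items and nothing piles up in an internal buffer.
        Flux<String> flux = Flux.create(sink ->
            sink.onRequest(n -> {
                for (long i = 0; i < n && !sink.isCancelled(); i++) {
                    sink.next("" + offset.getAndIncrement());
                }
            }));

        flux.take(5).subscribe(System.out::println);
    }
}
```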

Here's my publisher:

@MessageMapping("events")
public Flux<String> events(String data) {
    Flux<String> flux = Flux.generate(new Consumer<SynchronousSink<String>>() {
        long offset = 0;

        @Override
        public void accept(SynchronousSink<String> emitter) {
            emitter.next("" + offset++);
        }
    });

    return flux.limitRate(100);  // limitRate doesn't do anything
}

And my consumer:

@Autowired RSocketRequester requester;

@EventListener(ApplicationReadyEvent.class)
public void run() throws InterruptedException {
    long time = System.currentTimeMillis();  // start time for the rate calculation

    requester.route("events")
        .data("Just Go")
        .retrieveFlux(String.class)
        //.limitRate(1000)  // commenting this line makes publisher go OOM
        .bufferTimeout(20000, Duration.ofMillis(10))
        .subscribe(new Consumer<List<String>>() {
            long totalReceived = 0;
            long totalBytes = 0;

            @Override
            public void accept(List<String> s) {
                totalReceived += s.size();
                totalBytes += s.stream().mapToInt(String::length).sum();
                System.out.printf(
                    "So we received: %4d messages @ %8.1f msg/sec (%d kB/sec)\n",
                    s.size(),
                    ((double) totalReceived / (System.currentTimeMillis() - time)) * 1000,
                    totalBytes / (System.currentTimeMillis() - time));

                try {
                    Thread.sleep(200);  // Delay consumer so publisher has to slow down
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        });

    Thread.sleep(100000);  // leave Spring running for a bit (dirty)
}

What I don't understand is why this wouldn't work. Flux.generate uses a callback, but that callback keeps getting called as fast as possible, leading to huge memory allocations in the JVM until it goes OOM. Why does it keep calling generate?
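Strangely, when I test generate locally with a BaseSubscriber that requests a fixed amount, the callback fires exactly that many times, so in-process backpressure does seem to be honored (a small sketch I used to check this, without any RSocket in between):

```java
import java.util.concurrent.atomic.AtomicInteger;
import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;

public class LocalDemandDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();

        Flux<String> flux = Flux.generate(sink -> {
            calls.incrementAndGet();  // count how often the generator runs
            sink.next("x");
        });

        flux.subscribe(new BaseSubscriber<String>() {
            @Override
            protected void hookOnSubscribe(Subscription s) {
                request(3);  // ask for exactly three items, never more
            }

            @Override
            protected void hookOnNext(String value) {
                System.out.println("got " + value);
            }
        });

        // the generator ran only as often as demand was signalled
        System.out.println("calls = " + calls.get());
    }
}
```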

What am I missing?

john16384
