I have a fully reactive web app that aggregates information from two backend services.
Incoming request -> requests are sent to services A and B -> responses are aggregated -> aggregated response is emitted.
pseudocode:
public Mono<ResponseEntity<List<String>>> getValues() {
    return Mono.zip(getValuesA(), getValuesB(),
            (a, b) -> Stream.concat(a.stream(), b.stream()).collect(Collectors.toList()))
        .map(result -> ResponseEntity.ok(result));
}

public Mono<List<String>> getValuesA() {
    return webClient.get()
        .uri(uriA)
        .retrieve()
        .bodyToMono(new ParameterizedTypeReference<List<String>>() {});
}
// getValuesB same as A, but with uriB.
Because of the high request frequency, I want to bundle requests to the backend services. I thought using Sinks would be the right approach: every requesting party gets the sink back as a Mono, and once a threshold of 10 pending requests has been exceeded, a single backend request is made and the response is emitted to every sink.
public Mono<ResponseEntity<List<String>>> getValues() {
    return Mono.zip(getValuesA(), getValuesB(),
            (a, b) -> Stream.concat(a.stream(), b.stream()).collect(Collectors.toList()))
        .map(result -> ResponseEntity.ok(result));
}
public Mono<List<String>> getValuesA() {
    Sinks.One<List<String>> sink = Sinks.one();
    queue.add(sink);
    if (queue.size() > 10) {
        webClient.get()
            .uri(uriA)
            .retrieve()
            .bodyToMono(new ParameterizedTypeReference<List<String>>() {})
            .subscribe(response -> {
                for (Sinks.One<List<String>> sinkItem : queue) {
                    sinkItem.tryEmitValue(response); // emit to each waiting sink
                }
                queue.clear(); // drain the queue so the next batch starts empty
            });
    }
    return sink.asMono();
}
// getValuesB same as A, but with uriB.
The problem in this code is the 'subscribe' part. As soon as we subscribe to the WebClient's response, it blocks the thread. This only happens in 10% of the requests, but that is already too much for an endpoint that is called very frequently. What can I do to 'unblock' this part? And if sinks weren't the best choice, what would have been a better one?
PS. All pseudocode used is NOT production code. It may have many flaws; it is only meant to illustrate the problem I'm facing at the moment.
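To show the bundling idea I'm aiming for outside of Reactor, here is a minimal sketch of the same pattern with plain `CompletableFuture`s from the JDK. The class name, the threshold, and the simulated backend call are all my own placeholders, not part of any framework:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class RequestBundler {
    private static final int THRESHOLD = 3; // batch size; 10 in my real scenario
    private final List<CompletableFuture<List<String>>> waiting = new ArrayList<>();

    // Each caller immediately gets a future; the backend is hit once per full batch.
    public synchronized CompletableFuture<List<String>> getValues() {
        CompletableFuture<List<String>> future = new CompletableFuture<>();
        waiting.add(future);
        if (waiting.size() >= THRESHOLD) {
            List<CompletableFuture<List<String>>> batch = new ArrayList<>(waiting);
            waiting.clear(); // drain so the next batch starts empty
            fetchFromBackend().thenAccept(response ->
                batch.forEach(f -> f.complete(response))); // fan out one response to all callers
        }
        return future;
    }

    // Stand-in for the WebClient call; runs asynchronously, callers are not blocked.
    private CompletableFuture<List<String>> fetchFromBackend() {
        return CompletableFuture.supplyAsync(() -> List.of("a", "b"));
    }
}
```

The key point is that `getValues()` returns right away in every case; only the completion of the futures is deferred until the batch fires.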