This thread is a continuation of the GitHub issue at: https://github.com/spring-projects/spring-data-r2dbc/issues/194
Context:
Hi,
I just tried a very simple example based on two reactive repositories. Given br, an R2DBC CRUD repository, and cr, another R2DBC CRUD repository:
br.findAll()
    .flatMap(b -> cr.findById(b.getPropertyOne())
        .doOnNext(c -> b.setProperty2(c))
        .thenReturn(b))
    .collectList()
    .block();
This code sample never completes: only the first 250 or so entries reach the collectList operator. After some digging, adding an onBackpressureXXX operator after findAll seems to "fix" the issue by... well, dropping elements or buffering them.
At this point, my understanding is that the R2DBC reactive repositories don't use the consumer feedback (backpressure) mechanism, which removes a significant part of R2DBC's benefits.
Am I wrong? Is there a better way to achieve the same objective?
Thanks !
Suggestion from @mp911de:
As a general rule, avoid creating a stream while another stream is active (famous quote: do not cross the streams).
If you want to fetch related data, ideally collect all results into a List and then run the subqueries. This way, the initial response stream is fully consumed and the connection is free to fetch additional results.
Something like the following snippet should do the job:
br.findAll().collectList()
    .flatMap(it -> {
        List<Mono<Reference>> refs = new ArrayList<>();
        for (Person p : it) {
            refs.add(cr.findById(p.getPropertyOne()).doOnNext(…));
        }
        return Flux.concat(refs).then(Mono.just(it));
    });
But this removes the benefit of streaming the data without keeping it all in memory (my final step is not collecting to a list but stream-writing the output to a file).
Any help on this one?
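One possible middle ground between full streaming and collectList is to buffer the outer results in fixed-size batches and run the subqueries batch by batch, so memory stays bounded at the batch size rather than the whole result set. This is only a sketch: whether it actually avoids the connection contention depends on the driver and pool setup (the outer stream is still open while a batch's subqueries run, so it assumes the subqueries can be served by another pooled connection). The repositories are replaced here by hypothetical stand-in publishers (findAllParents, findChildById) so the snippet is self-contained:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class BufferedLookup {

    // Hypothetical stand-ins for the two repositories, so the sketch is
    // self-contained; the real code would use br.findAll() and cr.findById(id).
    static Flux<Integer> findAllParents() {
        return Flux.range(1, 1000);
    }

    static Mono<String> findChildById(int id) {
        return Mono.just("child-" + id);
    }

    // Consume the outer stream in bounded chunks: each batch of 50 parents is
    // fully materialized before its per-id subqueries run, so at any moment
    // only one batch (plus one in-flight subquery) is held in memory.
    static long process() {
        return findAllParents()
            .buffer(50)                                   // Flux<List<Integer>>
            .concatMap(batch -> Flux.fromIterable(batch)
                .concatMap(id -> findChildById(id)
                    .map(child -> id + " -> " + child)))  // enrich each parent
            // .doOnNext(line -> ...)                     // stream-write each line here
            .count()
            .block();
    }

    public static void main(String[] args) {
        System.out.println(process()); // prints 1000
    }
}
```

concatMap (rather than flatMap) keeps batches strictly sequential, so subqueries for batch N+1 only start once batch N is fully processed. If batching still trips the same connection issue in practice, mp911de's collectList approach above remains the safe baseline.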