The Apollo Server documentation states that batching and caching should not be used together with REST API data sources:
Most REST APIs don't support batching. When they do, using a batched endpoint can jeopardize caching. When you fetch data in a batch request, the response you receive is for the exact combination of resources you're requesting. Unless you request that same combination again, future requests for the same resource won't be served from cache.
We recommend that you restrict batching to requests that can't be cached. In these cases, you can take advantage of DataLoader as a private implementation detail inside your RESTDataSource [...]
Source: https://www.apollographql.com/docs/apollo-server/data/data-sources/#using-with-dataloader
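As I understand the docs' concern, it can be illustrated like this: an HTTP cache keys responses on the full request URL, so two batch requests with overlapping ids never share cache entries. Here is a minimal sketch (the URLs and the `cachedGet` helper are hypothetical, not Apollo's or any real API's):

```javascript
// Sketch: a response cache keyed by the full URL treats overlapping
// batches as entirely unrelated entries.

const urlCache = new Map(); // response cache keyed by full request URL

function cachedGet(url, fetcher) {
  if (!urlCache.has(url)) {
    urlCache.set(url, fetcher(url)); // cache miss: call the origin
  }
  return urlCache.get(url); // cache hit: same exact URL seen before
}

let originCalls = 0; // counts requests that actually reach the REST API
const fakeFetch = (url) => {
  originCalls += 1;
  return `response for ${url}`;
};

cachedGet("/users?ids=1,2,3", fakeFetch); // miss -> origin call
cachedGet("/users?ids=1,2,3", fakeFetch); // exact same combination -> hit
cachedGet("/users?ids=2,3,4", fakeFetch); // ids 2,3 overlap, but the URL
                                          // differs -> miss, origin call again
```

So despite ids 2 and 3 having been fetched already, the second combination is a full cache miss - which seems to be exactly what the docs mean by "Unless you request that same combination again".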
I'm not sure why they say "Unless you request that same combination again, future requests for the same resource won't be served from cache."
Why shouldn't future requests be served from the cache? Here we have two caching layers. The first is the DataLoader, which batches requests and memoizes - with a per-request cache - which objects have been requested, returning the same object from its cache if it is requested multiple times within a single request.
And we have a second-level cache that caches individual objects across multiple requests (or at least it could be implemented so that it caches the individual objects rather than the whole result set). Wouldn't that ensure that future requests are served from the second-layer cache even when the overall request changes, as long as it includes some of the objects that were requested previously?
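To make the idea concrete, here is a rough sketch of the two layers I have in mind. This is not Apollo's or DataLoader's actual implementation - `makeRequestScopedLoader`, `batchFetch`, and the synchronous fetch simulation are all assumptions for illustration:

```javascript
// Layer 2: a shared cache keyed by individual object id,
// surviving across requests.
const sharedCache = new Map();

const fetchCalls = []; // records which ids actually hit the REST API

// Simulated batched REST endpoint (synchronous here for brevity).
function batchFetch(ids) {
  fetchCalls.push(ids);
  return ids.map((id) => ({ id, name: `user-${id}` }));
}

function makeRequestScopedLoader() {
  // Layer 1: per-request memoization, like DataLoader's default cache.
  const perRequest = new Map();
  return function load(ids) {
    // Only ids missing from BOTH layers need to go over the wire.
    const missing = ids.filter(
      (id) => !perRequest.has(id) && !sharedCache.has(id)
    );
    if (missing.length > 0) {
      for (const obj of batchFetch(missing)) {
        sharedCache.set(obj.id, obj); // cache each object individually
      }
    }
    return ids.map((id) => {
      if (!perRequest.has(id)) perRequest.set(id, sharedCache.get(id));
      return perRequest.get(id);
    });
  };
}

// Request A batches ids 1,2,3 into one "REST call".
const loadA = makeRequestScopedLoader();
loadA([1, 2, 3]);

// Request B asks for 2,3,4: only 4 triggers a fetch; 2 and 3 come
// from the object-level cache even though the combination differs.
const loadB = makeRequestScopedLoader();
loadB([2, 3, 4]);
// fetchCalls now holds two batches: [1,2,3] and then just [4]
```

In this sketch the second request is a partial cache hit, which is what makes me question the docs' blanket statement. Is there something about real HTTP-level caching in RESTDataSource that makes this per-object approach impossible or impractical?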