When using Graphene's Relay Connection field, it's very easy to paginate through a queryset. The client passes the arguments (first, after, before, last), and by simply returning a queryset from the resolver, pagination happens automatically.
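For reference, this is the kind of setup I mean, as a minimal sketch assuming graphene-django, with a hypothetical MyModel living in myapp.models:

import graphene
from graphene import relay
from graphene_django import DjangoObjectType
from graphene_django.fields import DjangoConnectionField

from myapp.models import MyModel  # hypothetical app/model


class MyModelNode(DjangoObjectType):
    class Meta:
        model = MyModel
        interfaces = (relay.Node,)


class Query(graphene.ObjectType):
    my_models = DjangoConnectionField(MyModelNode)

    def resolve_my_models(self, info, **kwargs):
        # Returning the full queryset is enough; the connection
        # field applies first/after/before/last slicing itself.
        return MyModel.objects.all()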
To me, a really powerful pattern here would be to cache the queryset globally (using Memcached, Redis, etc.) for each combination of arguments passed. That would make pagination slick, efficient, and scalable.
Here's an example where I've tried to do this:
# Create a memoized queryset using cache_memoize, but this could be any caching library
from cache_memoize import cache_memoize

# This creates a cache key from the arguments passed to the function
@cache_memoize(5000)
def get_paginated_my_model(**kwargs):
    # How do I apply the pagination arguments here?
    return MyModel.objects.all()

# Then... in our resolver
def resolve_my_models(self, info, **kwargs):
    print(kwargs)  # e.g. {'after': 'YXJyYXljb25uZWN0aW9uOjE=', 'first': 5}
    return get_paginated_my_model(**kwargs)
This works in terms of creating the cache key, but the stored queryset here is always the full, unsliced queryset: the connection field seems to apply the slicing to whatever the resolver returns, using some magic behind the scenes.
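For what it's worth, here's a sketch of what I imagine doing the slicing by hand would look like, assuming Graphene's default array-connection cursors, which are base64 of "arrayconnection:<offset>" (the example cursor above decodes to 'arrayconnection:1'). cursor_to_offset here is my own helper, not a Graphene API:

import base64

def cursor_to_offset(cursor):
    # Default Relay cursors are base64("arrayconnection:<offset>"),
    # e.g. 'YXJyYXljb25uZWN0aW9uOjE=' decodes to 'arrayconnection:1'
    return int(base64.b64decode(cursor).decode("utf-8").rsplit(":", 1)[1])

@cache_memoize(5000)
def get_paginated_my_model(first=None, after=None, **kwargs):
    # 'after' means items strictly after that cursor, so start at offset + 1
    offset = cursor_to_offset(after) + 1 if after else 0
    qs = MyModel.objects.all()[offset:]
    if first is not None:
        qs = qs[:first]
    # Force evaluation so the cache stores the page of rows,
    # not a lazy, unevaluated queryset
    return list(qs)

Even then I'm not sure it's safe: if the resolver returns this pre-sliced list, Graphene will presumably slice it again relative to index 0 and compute cursors from the wrong offsets, which is really the crux of my question.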
Is there a way of achieving something like this? Is it possible to hook into or reuse the slicing logic Graphene uses somehow?