
I have more questions based on this question and answer, which is now quite old but still seems to be accurate.

The suggestion of storing results in memory seems problematic in many scenarios.

  • A web farm where the end-user isn't locked to a specific server.
  • Very large result sets.
  • Smaller result sets with many different users and queries.

I see a few ways of handling paging based on what I've read so far.

  • Use OFFSET and LIMIT, at a potentially high RU cost.
  • Use continuation tokens plus a cache, with the scaling concerns noted above.
  • Save the continuation tokens themselves so clients can go back to previous pages.
    • This can get complex, since there may not be a one-to-one relationship between tokens and pages.

      See Understanding Query Executions

      In addition, there are other reasons that the query engine might need to split query results into multiple pages. These include:

      • The container was throttled and there weren't available RUs to return more query results
      • The query execution's response was too large
      • The query execution's time was too long
      • It was more efficient for the query engine to return results in additional executions
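Because of the reasons quoted above, one execution may return only part of a page, so a client that wants fixed-size pages has to keep draining executions until the page fills up. A minimal sketch of that loop, with an illustrative stand-in `execute_query` (not an SDK API) and integer tokens standing in for the opaque token strings a real Cosmos DB query returns; the per-execution cap mirrors what `MaxItemCount` does in the real SDKs:

```python
# Simulated data set and deliberately uneven per-execution batch sizes,
# mimicking the engine returning fewer results than requested.
ITEMS = list(range(25))
BATCHES = [3, 1, 7, 2, 5, 4, 3]

def execute_query(token, max_items):
    """One simulated query execution (stand-in, not a real SDK call).
    Returns (batch, next_token); next_token is None when exhausted.
    Real continuation tokens are opaque strings, not offsets."""
    start = token or 0
    size = min(max_items, BATCHES[start % len(BATCHES)])
    batch = ITEMS[start:start + size]
    nxt = start + len(batch)
    return batch, (nxt if nxt < len(ITEMS) else None)

def get_page(page_size, token=None):
    """Assemble one fixed-size client page by draining as many
    executions as needed, then hand back a resume token."""
    page = []
    while len(page) < page_size:
        batch, token = execute_query(token, page_size - len(page))
        page.extend(batch)
        if token is None:          # results exhausted mid-page
            break
    return page, token
```

Calling `get_page(10)` three times, threading the returned token through, yields pages `[0..9]`, `[10..19]`, and a final short page `[20..24]` with a `None` token. The point of the sketch is only that "one page" on the client side can span several server executions, which is exactly why saved tokens don't map one-to-one to pages.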

Are there any other, maybe newer options for paging?

drs9222
  • Looks like a good summary. I'm not aware of any alternatives. In my current project I ended up just giving up the idea of arbitrary page access and designed the API to have clients traverse the continuation chain until they have all items for the query. OFFSET/LIMIT could work for modest numbers but not so much as item count grows. – Noah Stahl Aug 22 '20 at 16:05
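The approach described in the comment (give up arbitrary page access and have clients walk the continuation chain to the end) can be sketched as follows; `run_execution` is an illustrative simulation of one query execution, not a real SDK call:

```python
# Simulated result set; the "server" decides each batch's size.
ITEMS = [f"item-{i}" for i in range(12)]

def run_execution(token):
    """One simulated execution: (batch, next_token), where next_token
    is None once the results are exhausted. Real tokens are opaque
    strings; a plain offset stands in for one here."""
    start = token or 0
    batch = ITEMS[start:start + 5]
    nxt = start + len(batch)
    return batch, (nxt if nxt < len(ITEMS) else None)

def fetch_all():
    """Traverse the continuation chain until the token runs out,
    accumulating every item for the query."""
    results, token = [], None
    while True:
        batch, token = run_execution(token)
        results.extend(batch)
        if token is None:
            return results
```

The trade-off is that the client always pays for the whole result set, but the server stays stateless and no token cache is needed, which sidesteps the web-farm and scaling concerns raised in the question.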

0 Answers