
Possible Duplicate:
Slow pagination over tons of records in mongo

I created a simple test:

> db.t.count()
7852054
> db.t.find().skip( 1500000 ).limit(1)
{ "_id" : ObjectId("4fc078aa82618808f416e372"), "value" : 1500000 }
>

Paging with skip and limit takes too long on huge collections.

Is there any better way to do that?

  • I have another application that keeps inserting values: for i in xrange(0,99999999): db.t.insert( {"value":i} ). While that is running, skip 1500000 takes more than 5 seconds. After I stop the inserting application, skip 1500000 responds within one second, but skip 12000010 takes about 5 seconds. – truease.com May 26 '12 at 06:44
  • could you explain what exactly you are trying to do? – Asya Kamsky May 26 '12 at 07:01
  • I want to do pagination: given a pageno, show the records from page_size*pageno to page_size*(pageno+1). – truease.com May 26 '12 at 08:51

1 Answer


Have you looked into the docs?

Unfortunately skip can be (very) costly and requires the server to walk from the beginning of the collection, or index, to get to the offset/skip position before it can start returning the page of data (limit). As the page number increases, skip becomes slower and more CPU intensive, and possibly IO bound, with larger collections.
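One quick way to see this (a sketch; the exact counter names vary by server version) is to run explain() on the skipping query and watch how many documents or index entries are examined as the skip value grows:

> db.t.find().skip( 1500000 ).limit( 1 ).explain()
// the nscanned / totalDocsExamined counter grows roughly linearly with the skip value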

Range based paging provides better use of indexes but does not allow you to easily jump to a specific page.
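For example, here is a minimal sketch of range-based paging against the test collection above, assuming an index on value (the index, page size, and variable names are illustrative, not from the original answer):

> db.t.ensureIndex( { value: 1 } )                                          // index on the field used for paging (assumption)
> var page = db.t.find( { value: { $gt: 1500000 } } ).sort( { value: 1 } ).limit( 10 ).toArray()
> var last = page[ page.length - 1 ].value                                  // remember the last value on this page
> db.t.find( { value: { $gt: last } } ).sort( { value: 1 } ).limit( 10 )    // next page starts right after it

Each query seeks straight to the last seen value in the index instead of walking past skip-many entries, so every page costs about the same; the trade-off, as noted above, is that you cannot jump directly to an arbitrary page number.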

DrColossos