
In my testing on RethinkDB, I inserted 14 million documents into a table.

Sample document inserted:

{"name": "jason", "id": "1", "email": "jason@gmail.com", ...}

The id values were generated by a counter running up to 14 million.

When I tried to filter the table with this query:

r.db("test").table("test_table").filter({"id": "10000"})

it took about 13 seconds to return a single row.

Is there a faster way to filter the table and return the row we want?

yhtan

1 Answer


filter doesn't use an index; it just applies the predicate you give it to every row. You can use get to fetch a document by its primary key (r.table('test_table').get('10000') in your case, since your ids are strings), or getAll/between to look up rows by a secondary index.
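The difference is asymptotic: filter visits all N rows, while get is a single primary-key lookup. A minimal Python sketch (a hypothetical in-memory stand-in for the table, not the RethinkDB driver) illustrates why the 14-million-row scan is slow:

```python
# Hypothetical in-memory model of a table keyed by primary key,
# mirroring how RethinkDB stores documents.

def make_table(n):
    return {str(i): {"id": str(i), "name": f"user{i}"} for i in range(n)}

def filter_scan(table, predicate):
    # What filter() does: apply the predicate to every row -> O(N).
    return [doc for doc in table.values() if predicate(doc)]

def get_by_pk(table, key):
    # What get() does: a single primary-key lookup -> O(1).
    return table.get(key)

table = make_table(100_000)
rows = filter_scan(table, lambda doc: doc["id"] == "10000")  # visits 100,000 docs
doc = get_by_pk(table, "10000")                              # one lookup
assert rows == [doc]
```

The same asymptotics apply in the real database: the filter query touches every one of the 14 million documents, while get('10000') touches one.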

mlucy
  • What if the key 'id' is not the primary key, but an inserted field? – yhtan Mar 08 '16 at 08:15
  • Then you need an index on it to make querying it fast. You can read more about secondary indexes at https://www.rethinkdb.com/docs/secondary-indexes/ . – mlucy Mar 08 '16 at 19:52
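A secondary index plays the same role for a non-primary field: it maps each indexed value to the documents holding it, so getAll can answer with one lookup instead of a scan. A rough Python sketch of the idea (an assumed in-memory model with a hypothetical "pk" primary key and "email" field, not the driver API):

```python
# Hypothetical model of a secondary index: a map from an indexed
# field's value to the list of documents holding that value.

from collections import defaultdict

docs = [{"pk": str(i), "email": f"user{i}@gmail.com"} for i in range(50_000)]

# Analogue of indexCreate('email'): build the value -> documents map once.
email_index = defaultdict(list)
for doc in docs:
    email_index[doc["email"]].append(doc)

# Analogue of getAll('user10000@gmail.com', {index: 'email'}):
# one map lookup instead of testing all 50,000 documents.
matches = email_index["user10000@gmail.com"]
assert matches[0]["pk"] == "10000"
```

Like the real index, the map costs extra space and must be maintained on writes, which is the usual trade for fast reads.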