Using: Postgres 10.2
I have a table animals with these columns (omitting some unrelated ones):
- animalid (number, primary key)
- location (text)
- type (text)
- name (text)
- data (jsonb), e.g. {"age": 2, "tagid": 11}
Important points:
- This table is partitioned into 1000 child tables (a rough sketch of the structure is shown below).
- Each child table holds around 100,000 records, so there are roughly 100 million records in total.
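A simplified version of the setup (the partition key, range boundaries, and child table names here are hypothetical, not the real ones):

CREATE TABLE animals (
    animalid bigint NOT NULL,
    location text,
    type     text,
    name     text,
    data     jsonb
) PARTITION BY RANGE (animalid);

-- one child table per range of ids; the primary key lives on the
-- child tables, since Postgres 10 cannot define it on the parent
CREATE TABLE animals_part_0001 PARTITION OF animals
    FOR VALUES FROM (1) TO (100001);
CREATE TABLE animals_part_0002 PARTITION OF animals
    FOR VALUES FROM (100001) TO (200001);
-- ... and so on, up to ~1000 child tables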
My application fetches an animal based on its animalid. For example:
select "animals"."animalid", "animals"."type", "animals"."data", "animals"."location", "animals"."name"
from "animals"
where "animals"."animalid" = 2241
However, when there are many concurrent requests, this throws the error:
ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction.
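For reference, as far as I understand the shared lock table has room for about max_locks_per_transaction * (max_connections + max_prepared_transactions) locks. These are the settings I have been looking at:

SHOW max_locks_per_transaction;    -- default is 64
SHOW max_connections;
SHOW max_prepared_transactions;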
I would have thought that select queries are not affected by locks on these tables. Or could it be that queries outside this application fill up the shared lock table with locks on the partitions, and thereby also affect the select queries?
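From what I can tell, a select that goes through the parent table takes an ACCESS SHARE lock on every partition. This is how I would try to verify that (an unverified sketch):

BEGIN;
SELECT * FROM animals WHERE animalid = 2241;
-- locks held by this session; with 1000 partitions I would expect
-- roughly one AccessShareLock row per child table
SELECT count(*)
FROM pg_locks
WHERE pid = pg_backend_pid()
  AND mode = 'AccessShareLock';
COMMIT;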
I have the option of querying the child partition directly (there is logic in the application to determine which one). Could this help fix the issue?
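In other words, the application would issue something like this instead (animals_part_0123 being a hypothetical child table name, chosen by the application logic):

SELECT animalid, type, data, location, name
FROM animals_part_0123
WHERE animalid = 2241;

My understanding is that this would lock only that one child table rather than all 1000 partitions.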
Is it generally a good idea to use a larger value for max_locks_per_transaction when there are partitioned tables and queries that update them?
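If so, I assume the change would be something along these lines (256 is just an example value; as far as I know this setting only takes effect after a server restart):

ALTER SYSTEM SET max_locks_per_transaction = 256;
-- followed by a restart of the server, e.g.
-- pg_ctl restart -D /path/to/data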
My main concern is that I do not quite understand why a select query is affected here. Can anyone explain?