8

I am getting a problem when taking a backup of a database that contains around 50 schemas, each schema having around 100 tables.

pg_dump throws the error below, suggesting that I increase max_locks_per_transaction:

pg_dump: WARNING:  out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: The command was: SELECT tableoid, oid, prsname, prsnamespace, prsstart::oid, prstoken::oid, prsend::oid, prsheadline::oid, prslextype::oid FROM pg_ts_parser

Updating max_locks_per_transaction to 256 in postgresql.conf did not solve the problem.
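Note that max_locks_per_transaction only takes effect after a full server restart (a reload is not enough), so the running value is worth double-checking first, for example:

    psql <db_name> -c "SHOW max_locks_per_transaction;"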

Are there any other possibilities that could cause this problem?

Edited (07 May, 2016):

PostgreSQL version = 9.1

Operating system = Ubuntu 14.04.2 LTS

shared_buffers in postgresql.conf = 2GB

Edited (09 May, 2016):

My postgresql.conf:

maintenance_work_mem = 640MB
wal_buffers = 64MB
shared_buffers = 2GB
max_connections = 100
max_locks_per_transaction = 10000
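For reference, the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks, so with these settings (and assuming the default max_prepared_transactions = 0) that works out to:

    10000 * (100 + 0) = 1,000,000 object locks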
pgollangi

4 Answers

8

You might need to increase max_locks_per_transaction even further. Check the documentation for details on what this parameter means. If in doubt, check how many tables you have and set max_locks_per_transaction to at least that number; then you should be OK.
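For example, one rough way to count them (a sketch that counts ordinary tables in all non-system schemas; <db_name> is a placeholder):

    psql <db_name> -c "SELECT count(*)
                       FROM pg_class c
                       JOIN pg_namespace n ON n.oid = c.relnamespace
                       WHERE c.relkind = 'r'
                         AND n.nspname NOT LIKE 'pg_%'
                         AND n.nspname <> 'information_schema';"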

Peter Eisentraut
8

I solved this problem by backing up each schema individually; as the size of the database grows (be it the number of schemas or the number of tables), it becomes hard to take a backup with a single pg_dump run.

I made the following modifications to my script to take schema-wise backups:

  1. Before running pg_dump, list all database schemas into a file so that we can iterate over them and back up each schema.

    Here is the command to list all schemas to a file:

    psql <db_name> -o <output_file> < <sql_to_list_schema>

    Here sql_to_list_schema contains:

    SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';

  2. Now read each line of output_file and take a backup of that schema (a complete loop sketch follows after this list):

    pg_dump <db_name> -f <backup_file> -i -x -O -R -n <schema_name_read_from_file>
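Putting both steps together, a minimal sketch of the whole loop (db_name, schemas.txt and the backups/ directory are placeholders; the deprecated -i and -R switches are dropped here):

    #!/bin/sh
    # Sketch only: db_name, schemas.txt and backups/ are placeholders.
    DB=db_name
    SCHEMA_FILE=schemas.txt

    # 1. List all non-system schemas, one per line (unaligned, tuples only).
    psql "$DB" --no-align --tuples-only \
         -c "SELECT n.nspname FROM pg_catalog.pg_namespace n
             WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';" \
         -o "$SCHEMA_FILE"

    # 2. Dump each schema to its own file.
    mkdir -p backups
    while read -r schema; do
        pg_dump "$DB" -x -O -n "$schema" -f "backups/${schema}.sql"
    done < "$SCHEMA_FILE"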

Nicholas Sizer
pgollangi
  • The [link](http://www.wisdombay.com/articles/article000013.htm) you provided for the script doesn't work anymore... – Roshana Pitigala Mar 28 '18 at 16:34
  • You might want to use some extra params with the `psql` command. This will make it easier to read the names into a variable, e.g.: `--no-align --tuples-only --record-separator=" "`. – Nux Oct 10 '18 at 14:59
-1

Try this:

shared_buffers = 512GB
max_locks_per_transaction=10000

Please note that every environment is different.

Arseniy-II
zdarova
  • I get this error: HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 563431931904 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections. – Daniel Bentes Oct 20 '21 at 18:18
-3

I tried the command below, without tweaking any buffer sizes, and it worked:

psql -U <db_user_name> -d <db_name> -f <file_name>.sql