
We are evaluating PostgreSQL 11.1 for production use. With a workload of ~4251 updates, ~1000 deletes and ~3221 inserts per second (about 1 billion transactions per day), we face the problem that PostgreSQL does not reuse the space freed by deletes and updates, and tables constantly grow in size.

We configured aggressive autovacuum settings to avoid transaction ID wraparound. We also tried running VACUUM and VACUUM ANALYZE periodically, but there is still no space reuse. (Only VACUUM FULL or pg_repack return space to the operating system, but that is not reuse.)

Following are our vacuum settings:

autovacuum                          | on
vacuum_cost_limit                   | 6000
autovacuum_analyze_threshold        | 50
autovacuum_vacuum_threshold         | 50
autovacuum_vacuum_cost_delay        | 5
autovacuum_max_workers              | 32
autovacuum_freeze_max_age           | 2000000
autovacuum_multixact_freeze_max_age | 2000000
vacuum_freeze_table_age             | 20000
vacuum_multixact_freeze_table_age   | 20000
vacuum_cost_page_dirty              | 20
vacuum_freeze_min_age               | 10000
vacuum_multixact_freeze_min_age     | 10000
log_autovacuum_min_duration         | 1000
autovacuum_naptime                  | 10
autovacuum_analyze_scale_factor     | 0
autovacuum_vacuum_scale_factor      | 0
vacuum_cleanup_index_scale_factor   | 0
vacuum_cost_delay                   | 0
vacuum_defer_cleanup_age            | 0
autovacuum_vacuum_cost_limit        | -1
autovacuum_work_mem                 | -1

1 Answer


Your requirements are particularly hard for PostgreSQL.

  • You should set autovacuum_vacuum_cost_delay to 0 for that table.

  • Reset autovacuum_max_workers and autovacuum_naptime back to their default values.

  • Reset autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor to their default values or slightly lower values.
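The per-table and global changes above could look like this (the table name mytable is a placeholder for your heavily updated table):

```sql
-- Disable vacuum throttling for the hot table only:
ALTER TABLE mytable SET (autovacuum_vacuum_cost_delay = 0);

-- Reset the global settings to their defaults (takes effect on reload):
ALTER SYSTEM RESET autovacuum_max_workers;
ALTER SYSTEM RESET autovacuum_naptime;
ALTER SYSTEM RESET autovacuum_vacuum_scale_factor;
ALTER SYSTEM RESET autovacuum_analyze_scale_factor;
SELECT pg_reload_conf();
```

Per-table storage parameters override the global defaults, so you can keep conservative settings for the rest of the database.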

Your problem is not that autovacuum does not run often enough; rather, it is too slow to keep up with the workload.

Even with that you might only be able to handle this workload with HOT updates:

  • Make sure that the attributes that are updated a lot are not part of any index.

  • Create the table with a fillfactor below 100, say 70.
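A sketch of both points, assuming a hypothetical table mytable whose data column is updated frequently and therefore left unindexed:

```sql
-- fillfactor = 70 leaves 30% of each page free, so a HOT update can
-- place the new row version on the same page as the old one.
CREATE TABLE mytable (
    id   bigint PRIMARY KEY,
    data text              -- updated a lot, so deliberately not indexed
) WITH (fillfactor = 70);

-- For an existing table, the new fillfactor only applies to future
-- writes; existing pages are rewritten only by VACUUM FULL or pg_repack.
ALTER TABLE mytable SET (fillfactor = 70);
```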

HOT updates often avoid the need for VACUUM and the need to update indexes.

Check the n_tup_hot_upd column of pg_stat_user_tables to see if it works.
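For example, a query like this shows the share of updates that were HOT (again, mytable is a placeholder):

```sql
-- Compare HOT updates to total updates for one table:
SELECT n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```

If hot_pct stays low, check whether an index still covers one of the updated columns or whether the fillfactor leaves too little free space per page.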

Laurenz Albe
  • Hi Laurenz, thanks very much for the answer. Does it make sense to drop the "scale_factor" settings and use the "threshold" settings instead when table rows grow fast, since the scale factors are a percentage of total rows while the thresholds are fixed numbers? Thanks, Aliza. – Aliza Abulafia Mar 04 '19 at 13:39
  • That is indeed a useful option. – Laurenz Albe Mar 04 '19 at 13:42