For postgres,

I used 'pg_dump' to dump a db, then used 'psql' to restore the db on the other server. I found a huge volume of WAL logs created in pg_xlog/, which consumed a lot of disk space.

Is there any way postgres provides to compress these WAL logs automatically?

I saw a 'archive_command' setup in postgres' manual. Is it the correct direction?

Yang

1 Answer

Yes, archive_command is the setting you're looking for. From the manual:

archive_command (string)

The shell command to execute to archive a completed WAL file segment. Any %p in the string is replaced by the path name of the file to archive, and any %f is replaced by only the file name. (The path name is relative to the working directory of the server, i.e., the cluster's data directory.) Use %% to embed an actual % character in the command. It is important for the command to return a zero exit status only if it succeeds. For more information see Section 24.3.1.

This parameter can only be set in the postgresql.conf file or on the server command line. It is ignored unless archive_mode was enabled at server start. If archive_command is an empty string (the default) while archive_mode is enabled, WAL archiving is temporarily disabled, but the server continues to accumulate WAL segment files in the expectation that a command will soon be provided. Setting archive_command to a command that does nothing but return true, e.g. /bin/true (REM on Windows), effectively disables archiving, but also breaks the chain of WAL files needed for archive recovery, so it should only be used in unusual circumstances.

The Postgres wiki has one example:

# Enable WAL archiving on the primary to an archive directory accessible from
# the standby. If wal_keep_segments is a high enough number to retain the WAL
# segments required for the standby server, this is not necessary.
archive_mode    = on
archive_command = 'cp %p /path_to/archive/%f'
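
Since the question is about compressing the WAL files, the cp can be replaced with a compression command. A minimal sketch, assuming gzip is available on the server and /path_to/archive is a writable directory (both are placeholders, not anything prescribed by Postgres):

# Compress each completed WAL segment as it is archived.
archive_mode    = on
archive_command = 'gzip < %p > /path_to/archive/%f.gz'

If you later need to recover from this archive, the matching restore_command has to undo the compression, e.g. restore_command = 'gunzip < /path_to/archive/%f.gz > %p'. Note that gzip returns a non-zero exit status on failure, which satisfies the requirement quoted above that the command return zero only on success.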