
I am currently taking nightly exports of a Postgres 10.5 database, exporting schema only for two very large tables whose data I don't need. I compress the export when done, and would really like to switch to pg_dump's internal compression moving forward, since it has been much faster in my testing.

Here is the workflow that I am currently using:

pg_dump -U postgres -h example.com -T very_large_tbl -T large_tbl db > /tmp/mydbdump

pg_dump -U postgres -h example.com -s -t very_large_tbl -t large_tbl db >> /tmp/mydbdump

bzip2 -zc < /tmp/mydbdump > /tmp/compressed_dbdump.dat.bz2

The workflow must meet the following requirements:

  1. Tables very_large_tbl and large_tbl must be exported with no data (schema only).
  2. All other tables must be exported with schema + data.
  3. The end result must be compressed.

My current workflow meets these requirements without issue. However, in my testing on other databases, using pg_dump -F c is MUCH faster overall than exporting and then compressing afterwards.

From what I can tell, you cannot use -s in pg_dump to "pick and choose" which tables get schema-only. It's all or nothing. Am I wrong here?

My question is this:

Is it possible to export "some" tables as schema only while using the -F c option of pg_dump? Or, alternatively, is it possible to append to an existing dump that was created with -F c?

Even if the process has to take place in 2 parts, that's fine and I'm open to "out of the box" workarounds. Just wondering if it's at all possible.

emmdee

1 Answer


Well, like always, I just had to go back to the documentation and study it closely.

pg_dump has a --exclude-table-data option.

So, to exclude data for a table named my_table, it's as simple as --exclude-table-data=my_table.

This will still dump the table's definition (the CREATE statements), but will exclude any data within it.
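
To adapt my original workflow, something along these lines should work (same host, database, and table names as in my question; the output filename here is just an example). Since -F c compresses as it writes by default, the separate bzip2 step isn't needed anymore:

# Single custom-format dump: compressed by pg_dump itself, schema + data for
# everything except the two big tables, which are dumped schema-only
pg_dump -U postgres -h example.com -F c \
  --exclude-table-data=very_large_tbl \
  --exclude-table-data=large_tbl \
  -f /tmp/compressed_dbdump.custom db

Keep in mind that a custom-format dump is restored with pg_restore rather than psql.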

emmdee