I am currently taking nightly exports of a Postgres 10.5 database, exporting schema only for 2 very large tables whose data I don't need. I compress the export when done, and would really like to switch to pg_dump's internal compression going forward, since it has been much faster in my testing.
Here is the workflow that I am currently using:
# full export (schema + data) of everything except the two big tables
pg_dump -U postgres -h example.com -T very_large_tbl -T large_tbl db > /tmp/mydbdump
# append schema-only definitions for the two big tables
pg_dump -U postgres -h example.com -s -t very_large_tbl -t large_tbl db >> /tmp/mydbdump
# compress the combined dump
bzip2 -zc < /tmp/mydbdump > /tmp/compressed_dbdump.dat.bz2
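For reference, restoring this compressed plain-format dump is just a decompress piped back into psql, along the lines of the following (the target database name newdb is only a placeholder):

bunzip2 -c /tmp/compressed_dbdump.dat.bz2 | psql -U postgres -h example.com newdb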
The workflow needs to follow these guidelines:
- Tables very_large_tbl and large_tbl need to be exported with no data (schema only).
- All other tables need schema + data exported.
- The end result must be compressed.
My current workflow follows these guidelines without issue. However, in testing on other databases, using pg_dump -F c has given MUCH faster overall process timings than exporting and then compressing afterwards.
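What I was timing on those other databases was roughly a single full custom-format dump like this (the output filename is just an example; custom format compresses as it writes):

pg_dump -U postgres -h example.com -F c -f /tmp/mydbdump.custom db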
From what I can tell, you cannot use -s in pg_dump to "pick and choose" which tables get schema-only; it's all or nothing. Am I wrong here?
My question is this:
Is it possible to export "some" tables as schema only while using the -F c option of pg_dump? Or is it possible to append to an existing dump that was compressed with -F c?
Even if the process has to take place in 2 parts, that's fine, and I'm open to "out of the box" workarounds; I'm just wondering if it's at all possible.
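For example, the kind of 2-part variant I have in mind (just a sketch mirroring my current two invocations, not something I've tested end to end) would be two separate custom-format archives:

# everything except the two big tables, custom format
pg_dump -U postgres -h example.com -F c -T very_large_tbl -T large_tbl -f /tmp/mydbdump_data.custom db
# schema only for the two big tables, custom format
pg_dump -U postgres -h example.com -F c -s -t very_large_tbl -t large_tbl -f /tmp/mydbdump_schema.custom db

But I don't know whether two archives like that can be merged into one, or whether I'd just have to restore them separately with pg_restore.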