
Here is an example of how to restore a Postgres dump from S3:

```
wget -O - 'https://s3.amazonaws.com/database.dump' | pg_restore -h somedomain.us-east-1.rds.amazonaws.com -p 5432 -d databasename -U username
```

But what is the expected behavior if the file is very large, say 1 TB? How are wget and pg_restore synchronized, for example when wget downloads data faster than pg_restore can consume it?
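
A crude way to watch what actually happens (this is only a sketch; pv is not part of the original command and is assumed to be installed) is to put a throughput meter in the middle of the pipeline:

```
wget -O - 'https://s3.amazonaws.com/database.dump' \
  | pv \
  | pg_restore -h somedomain.us-east-1.rds.amazonaws.com -p 5432 -d databasename -U username
```

pv then reports how fast data is actually flowing between the two processes.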

Cherry
  • The writing process must (and will, automatically with a blocking write) wait until the pipe has space again to write into, like the reading process must wait until there is something to read. That can actually be a means to pause a process: keep the reading end from reading, e.g. by pressing ctrl-s in a terminal. Pipe buffer sizes are discussed [here](https://unix.stackexchange.com/questions/11946/how-big-is-the-pipe-buffer). – Peter - Reinstate Monica Dec 09 '19 at 16:07
  • The data is streamed and I'm pretty sure this has to do with I/O block sizes or memory page sizes, but I've long wondered the details of this. Good question. – virullius Dec 09 '19 at 16:09
  • Answered here: https://stackoverflow.com/questions/19122/bash-pipe-handling with this reference: https://en.wikipedia.org/wiki/Pipeline_(Unix)#Implementation (however not very detailed) – virullius Dec 09 '19 at 16:15
  • Does this answer your question? [Bash Pipe Handling](https://stackoverflow.com/questions/19122/bash-pipe-handling) – Romeo Ninov Dec 10 '19 at 07:22
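
To make the blocking behaviour described in the comments above concrete, here is a minimal stand-alone sketch (assuming GNU coreutils dd and the pv utility, neither of which appears in the original command): a fast producer is paired with a deliberately slow consumer, and the producer ends up paced entirely by the consumer.

```
# dd on its own could write to /dev/null far faster, but here pv throttles
# the read side of the pipe to 64 KiB/s. Once the small kernel pipe buffer
# (about 64 KiB on a default Linux pipe) is full, every further write() by
# dd blocks until pv has drained some data, so dd's final report on stderr
# shows an overall rate of roughly 64 KiB/s.
dd if=/dev/zero bs=64K count=10 | pv -q --rate-limit 64K > /dev/null
```

In the original pipeline, wget plays the role of dd and pg_restore the role of pv: when the pipe fills, wget's writes block, wget stops reading from its TCP socket, and TCP flow control in turn slows the download from S3, so even a 1 TB dump is never buffered locally in its entirety.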

0 Answers