I have been downloading a website using this wget command:

wget \
 --recursive \
 --no-clobber \
 --page-requisites \
 --html-extension \
 --convert-links \
 --restrict-file-names=windows \
 --domains website.org \
 --wait=10 \
 --limit-rate=30K \
    www.domain.org/directory/something.html

I wanted to use the --wait and --limit-rate options to avoid overloading the website. The download was going fine, but 24 hours in it was interrupted. I thought I could resume simply by relying on the --no-clobber option, and although wget is indeed not overwriting the files it has already downloaded, it still waits 10 seconds after checking each one.

Is there any way to make wget wait only when it actually has to download a file, so that the checking pass goes quickly until I catch up to the point where I left off? What would be the best way to do this?
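
For what it's worth, the only fallback I can think of is a manual catch-up pass: run the same command without --wait (with --limit-rate still throttling anything that does get transferred), then interrupt it and restart with the original --wait=10 once it starts fetching files I don't already have. This is just a workaround I could try, not the automatic behaviour I'm asking about:

# Catch-up pass: identical to the original command except --wait is omitted,
# so files that already exist are skipped without the 10-second pause.
# --limit-rate=30K still caps the speed of anything that is actually fetched.
wget \
 --recursive \
 --no-clobber \
 --page-requisites \
 --html-extension \
 --convert-links \
 --restrict-file-names=windows \
 --domains website.org \
 --limit-rate=30K \
    www.domain.org/directory/something.html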

Thanks.

Kallaste
  • Have you explored the `--continue` option? – devnull Oct 24 '13 at 13:22
  • I think the `--continue` (or `-c`) option is for resuming the download of a single file, not a sequence of files. That is, if my download had been cut off in the middle of a file, that file would be resumed instead of restarted. But I don't think it can help me here (I did try, and the result was the same; see the sketch below). Am I missing something? – Kallaste Oct 24 '13 at 14:13
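
For reference, a minimal illustration of the distinction discussed in the comments (the URL is the one from the question): `--continue` resumes a single partially downloaded file; it does not change how a recursive run treats files that are already complete on disk.

# Resume a single interrupted download from where it stopped:
wget -c www.domain.org/directory/something.html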

0 Answers