
I have some links such as:

  • http://foo.com/1/1/1/1.jpg
  • http://foo.com/1/2/2/2.jpg
  • http://foo.com/1/3/3/3.jpg
  • ...

How can I download all the files and directories with wget?

akoori

2 Answers


HTTP doesn't really expose a filesystem, so wget typically can't just grab a whole directory; it can only work with resources it knows about. It will try to fetch each URL that appears as an href or src attribute in the pages you point it at, but if a file isn't linked from anywhere, wget never learns it exists and won't try to get it.

Translation: If you want to get all that stuff, have it linked from somewhere in the page/site. Or, use FTP, which is far better suited for the job.
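For example, if the images are linked from an index page, a recursive run along these lines can pick them up (a sketch only; foo.com and the /1/ path are the placeholders from the question):

# Follow links recursively from the index page, never climb above /1/,
# and keep only .jpg files
wget -r -np -A '*.jpg' http://foo.com/1/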

cHao

If you have a list of URLs in a file named links.txt:

for url in $(cat links.txt); do wget "$url"; done
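wget can also read the list itself via its --input-file option, so the shell loop isn't strictly necessary:

# wget fetches one URL per line from links.txt
wget -i links.txt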

With aria2:

aria2c -i links.txt

If what you have is an HTML file full of href and other tags, you can use a regexp to pull the links out and end up with a clean list, as in the sketch below.
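A rough sketch, assuming the page is saved locally as page.html (a hypothetical filename):

# Extract the href="..." attribute values, one URL per line, into links.txt
grep -oE 'href="[^"]+"' page.html | cut -d'"' -f2 > links.txt

The resulting links.txt can then be fed straight to wget -i or aria2c -i as above.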

Pere