
Usually when I download sites with HTTrack I get all the files: images, CSS, JS, etc. Today the program finished downloading in just 2 seconds and grabbed only the index.html file, with the CSS and IMG references inside it still pointing to the external site. I've already reset my settings back to default, but that doesn't help. Does anyone know how to get it working properly again?

user3379220

1 Answer


Does the site have a robots.txt, and are you honouring it in your settings?

If it does, you can turn it off in "Options/spider/spider: Never" (according to this article)
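For anyone using the command-line version instead of the GUI, here is a minimal sketch of the equivalent, assuming the httrack CLI is installed and using example.com and ./mirror as placeholder URL and output directory. The -s0 switch tells HTTrack to never obey robots.txt or meta-robots rules:

    httrack "https://example.com" -O ./mirror -s0

By default HTTrack runs with -s2 (always obey robots.txt), which on a restrictive site can leave you with little more than index.html.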

Mark Fisher
  • Nope, the website does not have a robots.txt. I made the change you mentioned just to be sure, but still nothing. It's also not site-specific; I tried different websites, including my own. HTTrack scans "domain.com", then "domain.com/" and "domain.com/robots" (there isn't one), and then just stops and says it finished without errors. – user3379220 Nov 23 '14 at 03:02