I was able to download images from this web page. But how do I download images from the entire website?

# Create the destination folder if it doesn't exist, then open it in Explorer.
$folder = 'c:\drawings'
if (!(Test-Path -Path $folder)) { $null = New-Item -Path $folder -ItemType Directory }
explorer $folder

# Download the page and save every image it references.
# (The ForEach-Object wrapper around this block did nothing, since no input was piped into it.)
$url = "https://jennifer-aniston.org/gallery/"
$iwr = Invoke-WebRequest -Uri $url -Method GET
$images = $iwr.Images | Select-Object src
$wc = New-Object System.Net.WebClient
$images | ForEach-Object { $wc.DownloadFile($url + $_.src, "$folder\" + [System.IO.Path]::GetFileName($_.src)) }
  • You basically need to write a web crawler: from a start page, download all images, collect all _internal_ links, and repeat the process recursively (a sketch of this follows below the comments). Make sure you don't enter infinite cycles, as pages might link back to pages already visited. – zett42 Sep 25 '22 at 18:36
  • Note: if the page depends heavily on JS to load content, then this might not be possible using `Invoke-WebRequest`. E.g. some pages delay-load images when you scroll. – zett42 Sep 25 '22 at 19:02
  • Maybe https://www.httrack.com/ can help you with that task. Once you have the website copied, you could run a script to move the files from the subfolders into one folder (see the flattening sketch at the end). – Guenther Schmitz Sep 26 '22 at 12:54
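
A minimal sketch of the crawler idea from the first comment, under a few assumptions: it reuses the start URL and folder from the question, relies on the `.Images` and `.Links` properties that `Invoke-WebRequest` already exposes, walks the site with a queue instead of literal recursion, and only follows links on the same host. A visited set guards against the infinite cycles mentioned above; JS-loaded images (second comment) will not be found this way.

$startUrl = 'https://jennifer-aniston.org/gallery/'
$folder   = 'c:\drawings'
if (!(Test-Path -Path $folder)) { $null = New-Item -Path $folder -ItemType Directory }

$visited  = [System.Collections.Generic.HashSet[string]]::new()
$queue    = [System.Collections.Generic.Queue[string]]::new()
$queue.Enqueue($startUrl)
$siteHost = ([uri]$startUrl).Host
$wc       = New-Object System.Net.WebClient

while ($queue.Count -gt 0) {
    $page = $queue.Dequeue()
    if (!$visited.Add($page)) { continue }   # already visited -> skip to avoid cycles

    try { $iwr = Invoke-WebRequest -Uri $page -Method GET } catch { continue }

    # Download every image on this page, resolving relative src values against the page URL.
    foreach ($img in $iwr.Images) {
        $imgUri = [uri]::new([uri]$page, $img.src)
        $file   = Join-Path $folder ([System.IO.Path]::GetFileName($imgUri.LocalPath))
        try { $wc.DownloadFile($imgUri, $file) } catch { }
    }

    # Queue internal links only (same host, http/https) so the crawl stays on the site.
    foreach ($link in $iwr.Links) {
        if (-not $link.href) { continue }
        $uri = [uri]::new([uri]$page, $link.href)
        if ($uri.Scheme -notin 'http', 'https') { continue }
        if ($uri.Host -eq $siteHost -and -not $visited.Contains($uri.AbsoluteUri)) {
            $queue.Enqueue($uri.AbsoluteUri)
        }
    }
}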

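And a small sketch of the "move files from subfolders into one" step suggested in the HTTrack comment, assuming the mirror was saved to a hypothetical c:\mirror folder and the images should end up flat in c:\drawings:

$mirror = 'c:\mirror'     # hypothetical HTTrack output folder
$target = 'c:\drawings'
if (!(Test-Path -Path $target)) { $null = New-Item -Path $target -ItemType Directory }

# Collect common image types from all subfolders and move them into the single target folder.
Get-ChildItem -Path $mirror -Recurse -Include *.jpg, *.jpeg, *.png, *.gif -File |
    Move-Item -Destination $target -Force
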