
I just made a script to grab links from a website and save them into a text file.

Now I'm working on my regexes so it will grab links which contain php?dl= in the URL from the text file:

E.g.: www.example.com/site/admin/a_files.php?dl=33931

It's pretty much the address you get when you hover over the dl button on the site, which you can click to download or right-click and save.

I'm just wondering how to achieve this: downloading the content at the given address, which will be a *.txt file. All from the script, of course.

  • What is the question here? You made a script and now want it to only download certain URLs? Are you looking for a regexp? – Konerak Jul 06 '10 at 11:39
  • I'm trying to figure out how to download the file associated with a URL. For example, on the website you click the 'dl' icon/button and your browser automatically downloads the file for you, i.e. http://www.example.com/site/admin/a_files.php?dl=33931 would download "file1.txt". I'm just wondering how you can download the file in Perl. The regexp part is not a problem. Or have I missed a function that can do all of this with ease haha – eraldcoil Jul 06 '10 at 11:44
  • [Crawling in Perl - A Quick Tutorial](http://www.cs.utk.edu/cs594ipm/perl/crawltut.html) – João Pereira Jul 06 '10 at 11:39

3 Answers


Make WWW::Mechanize your new best friend.

Here's why:

  • It can identify links on a webpage that match a specific regex (/php\?dl=/ in this case)
  • It can follow those links through the follow_link method
  • It can get the targets of those links and save them to file

All this without needing to save your wanted links in an intermediate file! Life's sweet when you have the right tool for the job...


Example

use strict;
use warnings;
use WWW::Mechanize;

my $url  = 'http://www.example.com/';
my $mech = WWW::Mechanize->new();

$mech->get( $url );

# Match against the link URLs, since php?dl= appears in the address
my @linksOfInterest = $mech->find_all_links( url_regex => qr/php\?dl=/ );

my $fileNumber = 1;

foreach my $link (@linksOfInterest) {

    # ':content_file' saves the response body straight to disk
    $mech->get( $link, ':content_file' => "file" . ($fileNumber++) . ".txt" );
    $mech->back();
}
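
If you don't need the intermediate array at all, follow_link (mentioned in the list above) accepts the same matching criteria as find_all_links. A minimal sketch; the file name here is hypothetical:

# Follow the first link whose URL matches, then dump the content to disk
$mech->follow_link( url_regex => qr/php\?dl=/, n => 1 );
$mech->save_content( 'file1.txt' );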
– Zaid

You can download the file with LWP::UserAgent:

use LWP::UserAgent;

my $ua = LWP::UserAgent->new();
my $response = $ua->get($url, ':content_file' => 'file.txt');

Or, if you fetch without the ':content_file' option and need a filehandle on the content:

# content_ref returns a reference to the body string; a three-arg open
# on a scalar reference gives an in-memory filehandle (Perl 5.8+)
open my $fh, '<', $response->content_ref or die $!;
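
Putting it together, a minimal self-contained sketch (using the example URL from the question; the line-by-line loop is just for illustration):

use strict;
use warnings;
use LWP::UserAgent;

my $url = 'http://www.example.com/site/admin/a_files.php?dl=33931';
my $ua  = LWP::UserAgent->new();

my $response = $ua->get($url);
die $response->status_line unless $response->is_success;

# Open an in-memory filehandle on the downloaded content
open my $fh, '<', $response->content_ref or die $!;
while ( my $line = <$fh> ) {
    print $line;
}
close $fh;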
– Eugene Yarmash

Old question, but when I'm writing quick scripts, I often shell out to "wget" or "curl" and read from a pipe. That isn't portable across systems, but if I know my system has one or the other of these commands, it's generally good enough.

For example:

#!/usr/bin/env perl
use strict;
use warnings;

# Read the page through a pipe from curl (list form avoids the shell)
open my $fp, '-|', 'curl', '-s', 'http://www.example.com/'
    or die "can't run curl: $!";
while (<$fp>) {
  print;
}
close $fp;
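
Since the question is ultimately about saving the download to a file, the same approach works without a pipe at all; a sketch using curl's -o option (the output file name is hypothetical):

#!/usr/bin/env perl
use strict;
use warnings;

my $url = 'http://www.example.com/site/admin/a_files.php?dl=33931';

# -s silences the progress meter; -o writes the body to the named file
system( 'curl', '-s', '-o', 'file1.txt', $url ) == 0
    or die "curl failed: $?";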
– djconnel