
Currently I'm using Mechanize: I fetch each site with the get() method and check each main page with the content() method for a particular string. I have a very fast computer and a 10 Mbit connection, and it still took 9 hours to check 11K sites, which is not acceptable. The problem is the speed of the get() call, which obviously needs to fetch the page. Is there any way to make it faster, maybe by disabling something, since I only need the main page's HTML to be checked?
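
Roughly, the serial version I have now looks like this (sites.txt and the pattern are just placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    # Placeholder inputs: one URL per line, and the string to look for.
    open my $list, '<', 'sites.txt' or die "sites.txt: $!";
    chomp( my @sites = <$list> );
    my $needle = qr/something/;

    my $mech = WWW::Mechanize->new( autocheck => 0 );   # don't die on failed fetches

    # One blocking get() per site -- the fetches are what take the 9 hours.
    for my $url (@sites) {
        $mech->get($url);
        next unless $mech->success;
        print "$url matches\n" if $mech->content =~ $needle;
    }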

Thanks,

snoofkin

2 Answers


Make the requests in parallel instead of serially. If I needed to do this, I'd fork off a process to grab each page. Something like Parallel::ForkManager, LWP::Parallel::UserAgent, or WWW::Curl may help. I tend to favor Mojo::UserAgent.
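
Here is a minimal sketch using Parallel::ForkManager while keeping WWW::Mechanize for the fetch itself (the URL file, the pattern, and the limit of 20 children are placeholders to tune):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;
    use WWW::Mechanize;

    # Placeholder inputs: one URL per line, and the string to look for.
    open my $list, '<', 'sites.txt' or die "sites.txt: $!";
    chomp( my @sites = <$list> );
    my $needle = qr/something/;

    my $pm = Parallel::ForkManager->new(20);   # at most 20 children at once

    URL:
    for my $url (@sites) {
        $pm->start and next URL;    # parent: queue the next URL immediately

        # Child process: fetch one page and report whether it matches.
        my $mech = WWW::Mechanize->new( autocheck => 0, timeout => 15 );
        $mech->get($url);
        print "$url matches\n"
            if $mech->success and $mech->content =~ $needle;

        $pm->finish;                # child exits here
    }
    $pm->wait_all_children;

Output from the children interleaves on STDOUT; if you need to collect results in the parent, Parallel::ForkManager's run_on_finish callback together with $pm->finish(0, \%data) is the usual way to pass them back.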

brian d foy
  • Perfect! Thanks a lot. I didn't know you could multithread with Perl; I never actually looked for this type of feature when using Perl, but it really comes in handy in this case. – snoofkin Sep 10 '10 at 08:20
  • Well, my mistake. I meant processes. – snoofkin Sep 11 '10 at 07:13

Use WWW::Curl (and specifically WWW::Curl::Multi). I'm using it to crawl 100M+ pages per day. The module is a thin binding on top of libcurl, so it feels a bit C-ish, but it's fast and does almost anything libcurl is capable of doing.
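
The multi interface looks roughly like this (sites.txt and the pattern are placeholders; for 11K sites you would cap the number of handles in flight rather than adding them all up front as this sketch does):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Curl::Easy;
    use WWW::Curl::Multi;

    # Placeholder inputs: one URL per line, and the string to look for.
    open my $list, '<', 'sites.txt' or die "sites.txt: $!";
    chomp( my @sites = <$list> );
    my $needle = qr/something/;

    my $curlm = WWW::Curl::Multi->new;
    my ( %easy, %body, %buf, %url_of );
    my $id = 0;

    for my $url (@sites) {
        $id++;
        my $curl = WWW::Curl::Easy->new;
        open $buf{$id}, '>', \$body{$id} or die $!;
        $curl->setopt( CURLOPT_URL,       $url );
        $curl->setopt( CURLOPT_PRIVATE,   $id );        # key returned by info_read
        $curl->setopt( CURLOPT_WRITEDATA, $buf{$id} );  # body goes into $body{$id}
        $curl->setopt( CURLOPT_TIMEOUT,   15 );
        $easy{$id}   = $curl;   # keep a reference or the handle is destroyed
        $url_of{$id} = $url;
        $curlm->add_handle($curl);
    }

    my $active = $id;
    while ($active) {
        my $running = $curlm->perform;
        next if $running == $active;    # nothing finished yet (busy-wait for brevity)
        while ( my ( $done_id, $retcode ) = $curlm->info_read ) {
            if ($done_id) {
                $active--;
                close $buf{$done_id};
                print "$url_of{$done_id} matches\n"
                    if $retcode == 0 and $body{$done_id} =~ $needle;
                delete $easy{$done_id};   # let the finished handle be freed
            }
        }
    }

In practice you'd keep a bounded pool of easy handles active, adding a fresh one each time info_read reports one finished, and replace the busy loop around perform with something that waits on the handles' file descriptors.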

I would not recommend using LWP::Parallel::UA, as it's somewhat slow and the module itself is not very well thought out. When I started writing a crawler, I originally considered forking LWP::Parallel::UA, but I decided against it once I looked into its internals.

Disclaimer: I'm the current maintainer of the WWW::Curl module.

szbalint