I am using PHP and cURL to scrape the HTML of a single website's pages. Through experimentation I have discovered that my code only works when I specify 10 URLs or fewer in the $nodes array (see the code sample below). I need to scrape around 100 pages at once and save the source code to file. Can this be accomplished using one of cURL's built-in functions?
Here is the code I am using at the moment:
function getHTML() {
    $nodes = array(
        'http://www.example.com/page1.html',
        'http://www.example.com/page2.html',
        'http://www.example.com/page3.html',
        'http://www.example.com/page4.html',
        'http://www.example.com/page5.html',
        'http://www.example.com/page6.html',
        'http://www.example.com/page7.html',
        'http://www.example.com/page8.html',
        'http://www.example.com/page9.html',
        'http://www.example.com/page10.html',
        'http://www.example.com/page11.html',
        'http://www.example.com/page12.html',
        'http://www.example.com/page13.html',
        'http://www.example.com/page14.html',
        'http://www.example.com/page15.html',
        'http://www.example.com/page16.html',
        'http://www.example.com/page17.html',
        'http://www.example.com/page18.html',
        'http://www.example.com/page19.html',
        'http://www.example.com/page20.html',
        // ...and so on...
    );
    $node_count = count($nodes);

    // Create one cURL handle per URL and register it with the multi handle
    $curl_arr = array();
    $master = curl_multi_init();
    for ($i = 0; $i < $node_count; $i++) {
        $url = $nodes[$i];
        $curl_arr[$i] = curl_init($url);
        curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($master, $curl_arr[$i]);
    }

    // Run all the transfers until none are still active
    do {
        curl_multi_exec($master, $running);
    } while ($running > 0);

    // Read back each response and append it to the output file
    echo "results: ";
    for ($i = 0; $i < $node_count; $i++) {
        $results = curl_multi_getcontent($curl_arr[$i]);
        echo($i . "\n" . $results . "\n");
        echo 'done';
        file_put_contents('SCRAPEDHTML.txt', $results, FILE_APPEND);
    }
}
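One workaround I am considering is to split $nodes into smaller batches and run curl_multi over each batch in turn, so only a handful of handles are open at any one time. This is only a rough, untested sketch: the batch size of 10 and the scrapeBatch() helper name are placeholders of my own, not anything built into cURL, and I have added curl_multi_select() so the loop waits for activity instead of spinning.

// Rough sketch: process the URL list in batches so only a few handles
// are open at once. Batch size 10 and scrapeBatch() are placeholders.
function scrapeBatch($urls) {
    $handles = array();
    $master  = curl_multi_init();
    foreach ($urls as $i => $url) {
        $handles[$i] = curl_init($url);
        curl_setopt($handles[$i], CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($master, $handles[$i]);
    }
    do {
        curl_multi_exec($master, $running);
        curl_multi_select($master); // wait for activity instead of busy-looping
    } while ($running > 0);
    foreach ($handles as $handle) {
        // Append each page's source to the same output file as above
        file_put_contents('SCRAPEDHTML.txt', curl_multi_getcontent($handle), FILE_APPEND);
        curl_multi_remove_handle($master, $handle);
        curl_close($handle);
    }
    curl_multi_close($master);
}

// $nodes would be the full array of ~100 URLs from getHTML() above
foreach (array_chunk($nodes, 10) as $batch) {
    scrapeBatch($batch);
}

The idea is that each call to scrapeBatch() cleans up its handles before the next batch starts, so resource usage stays roughly constant no matter how many URLs are in $nodes.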
Thanks in advance