i'm voting to close this question as too broad. there are two separate things you need to figure out here:
1: how to download a page with curl (you can find countless examples on stackoverflow, and the curl_exec manual has a decent example too; see the sketch after this list)
2: how to parse links out of HTML (again, countless examples on SO; this post has a good list, for example)
try to figure out how to do these two things in order, and if you run into a specific issue with one of the steps that you need help with, ask a new question about that.
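as a rough idea of step 1, this is the usual curl pattern (a minimal sketch; fetch_page() is just a helper name i made up for this example, and real code probably wants a timeout, a user-agent, and better error handling):

// minimal curl fetch helper; fetch_page() is a made-up name for this sketch
function fetch_page(string $url): string {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    $html = curl_exec($ch);
    if($html === false){
        $err = curl_error($ch);
        curl_close($ch);
        throw new RuntimeException("curl error: ".$err);
    }
    curl_close($ch);
    return $html;
}

$html = fetch_page("http://example.com/");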
once you know how to fetch a page with curl and how to parse HTML with PHP, you can parse out the links and then simply fetch every link you found. i suggest you start with the curl_exec() manual, and for parsing out the links, try DOMDocument, for example:
$domd = new DOMDocument();
@$domd->loadHTML($html); // @ silences the warnings DOMDocument emits on malformed real-world HTML
foreach($domd->getElementsByTagName("a") as $link){
    echo "found a link: ".$link->getAttribute("href").PHP_EOL;
}
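and to tie it together, you can feed the links back into the fetch step. a rough sketch, reusing the hypothetical fetch_page() helper from the first snippet (note that href values are often relative, like "/about" or "page2.html", so real code should resolve them against the page url first):

// fetch every link found in the parsed document
foreach($domd->getElementsByTagName("a") as $link){
    $href = $link->getAttribute("href");
    if(strpos($href, "http") === 0){ // this sketch only bothers with absolute urls
        $page = fetch_page($href);
        echo "fetched ".strlen($page)." bytes from ".$href.PHP_EOL;
    }
}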