I've got libcurl fetching a page's source from the web, and I then go through it and pick out the data I need.
Everything works great except for one page. I had the same problem during offline testing, using ifstream with the page source saved to a .html file. What I think is happening is that the page renders the HTML and then fills in the data I want through JavaScript calls (not 100% sure of this), so it isn't present directly in the source.
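A quick way to check that theory is to search the raw response for a value the rendered page definitely shows. A minimal sketch, assuming the downloaded source is already in a std::string (like the readBuffer my write callback fills in below) and using a made-up expected value:

#include <string>

// Hypothetical helper: returns true if a value visible on the rendered page
// actually appears in the raw HTML that curl downloaded. If it doesn't, the
// value is being injected client-side (e.g. by JavaScript), which curl never executes.
static bool isInRawHtml(const std::string& rawHtml, const std::string& expected) {
    return rawHtml.find(expected) != std::string::npos;
}

Calling isInRawHtml(readBuffer, "value I expect") after curl_easy_perform tells me whether the data was ever in the source to begin with.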
The way I got around this in offline testing was to save the full web page from Safari as an offline file, I believe it was called a .webarchive file. That way, when I viewed it as source code, both the HTML and the data were rendered in the source.
I've trawled the internet for an answer but can't seem to find one. Can anyone help me with a setting in curl to download the web page in its "fullness"?
Here are the options I currently use:
curl_easy_setopt(this->curl, CURLOPT_URL, url);
curl_easy_setopt(this->curl, CURLOPT_FOLLOWLOCATION, 1L);
curl_easy_setopt(this->curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0");
curl_easy_setopt(this->curl, CURLOPT_COOKIEFILE, "cookies.txt"); // read cookies from this file
curl_easy_setopt(this->curl, CURLOPT_COOKIEJAR, "cookies.txt");  // write cookies back on cleanup
curl_easy_setopt(this->curl, CURLOPT_POSTFIELDS, postData);      // only set when the request is a POST
curl_easy_setopt(this->curl, CURLOPT_WRITEFUNCTION, this->WriteCallback);
curl_easy_setopt(this->curl, CURLOPT_WRITEDATA, &readBuffer);
res = curl_easy_perform(this->curl);
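For reference, here is a stripped-down, self-contained sketch of the same fetch, showing how the write callback and the buffer fit together. The URL and variable names are placeholders; the callback signature is the standard libcurl write-callback one:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Standard libcurl write callback: append each chunk of the response body
// to the std::string supplied through CURLOPT_WRITEDATA.
static size_t WriteCallback(char* contents, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(contents, size * nmemb);
    return size * nmemb;
}

int main() {
    std::string readBuffer;
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/page.html"); // placeholder URL
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::cerr << "curl_easy_perform failed: " << curl_easy_strerror(res) << "\n";
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    // readBuffer now holds the raw HTML exactly as the server sent it.
    std::cout << readBuffer << std::endl;
    return 0;
}

As far as I can tell, even with FOLLOWLOCATION and cookies set, this only ever gives back what the server sends, since curl doesn't run any of the page's JavaScript.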