I used System::Net::WebClient::DownloadString to get the HTML source of some webpages. But it doesn't work for some specific pages.
For those pages I switched to WebBrowser2. However, this approach has its own problems, since it actually loads and renders the pages.
I want to do the following:

1. If a page shows a message box, I want to suppress it. I don't want to see the message box, and I don't need that page's HTML; I just want to skip it and open the next page.
2. Some URLs point not to webpages but to files. Opening one of these triggers the download dialog, which is really irritating. I tried to filter out the URLs that aren't webpages, but I failed. I want to ignore anything that isn't a real webpage.
What should I do?
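For point 2, one direction I could imagine (a sketch, not a complete solution): issue a HEAD request for each URL first (e.g. with System::Net::HttpWebRequest and Method set to "HEAD") and only hand the URL to the browser control when the Content-Type header looks like HTML. The header check itself is plain string logic; `isHtmlContentType` below is a hypothetical helper, not an existing API:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Decide whether a Content-Type header value describes an HTML page.
// Hypothetical helper: call it on the header returned by a HEAD request
// before navigating WebBrowser2 to the URL; skip the URL when it returns
// false (e.g. application/pdf or application/zip would trigger a download).
bool isHtmlContentType(std::string contentType) {
    // Lowercase for a case-insensitive comparison.
    std::transform(contentType.begin(), contentType.end(),
                   contentType.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    // Drop any parameter such as "; charset=utf-8".
    const auto semi = contentType.find(';');
    if (semi != std::string::npos)
        contentType.erase(semi);
    // Trim surrounding whitespace.
    const auto first = contentType.find_first_not_of(" \t");
    const auto last  = contentType.find_last_not_of(" \t");
    if (first == std::string::npos)
        return false;
    contentType = contentType.substr(first, last - first + 1);
    return contentType == "text/html" ||
           contentType == "application/xhtml+xml";
}
```

With a check like this, only URLs whose HEAD response reports an HTML media type would be loaded in the browser control, and file URLs (PDFs, archives, images) would be skipped before the download dialog can ever appear.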