I've written a scraping application that pulls a large number of pages from a site and parses them. It works well on Windows and fetches the pages quickly. Under Mono on Linux, however, each request takes far longer to complete. I've found that if I write the URLs to a file I can fire up a wget process to pull the pages in bulk and then parse the files, but when I need cookies, other headers, and per-page processing before requesting the next page, wget is impractical.
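To give an idea of the structure, here is a minimal sketch of the per-page loop. The helper names (ProcessPage, ExtractNextUrl) and the start URL are hypothetical placeholders; the real application is larger, but the shape of the problem is the same.

```csharp
using System;
using System.IO;
using System.Net;

class Scraper
{
    static void Main()
    {
        var cookies = new CookieContainer();          // session cookies shared across requests
        string url = "http://example.com/start";      // hypothetical start page

        while (url != null)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.CookieContainer = cookies;                 // carry cookies forward
            request.UserAgent = "Mozilla/5.0 (scraper)";       // plus any other headers needed

            string html;
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                html = reader.ReadToEnd();            // this step is what crawls under Mono
            }

            ProcessPage(html);                        // per-page parsing
            url = ExtractNextUrl(html);               // the next URL depends on this page's content
        }
    }

    static void ProcessPage(string html) { /* parse the page */ }
    static string ExtractNextUrl(string html) { return null; /* placeholder */ }
}
```

Because each URL depends on the previous page, the requests can't simply be batched out to wget.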
I've searched extensively and the closest match to my problem is here, but it still doesn't offer a solution for Linux.
I understand the network routes involved may differ, but that seems unimportant, since wget on the same Linux box can pull the files at blistering speed whereas WebClient / HttpClient cannot.
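For what it's worth, this is roughly how I'd time the difference (the test URL is a placeholder): on Windows each download returns almost immediately, while the same loop under Mono is dramatically slower.

```csharp
using System;
using System.Diagnostics;
using System.Net;

class Timing
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            for (int i = 0; i < 10; i++)
            {
                var sw = Stopwatch.StartNew();
                client.DownloadString("http://example.com/");   // hypothetical test page
                sw.Stop();
                Console.WriteLine("Request {0}: {1} ms", i, sw.ElapsedMilliseconds);
            }
        }
    }
}
```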
What can I do to try to solve this bizarre and unexpected problem?