
Libcurl in my app seems to have trouble with non-existent domains: it stalls for a minimum of 7-20 s on "NXDOMAIN" requests (which seems to match CURLOPT_CONNECTTIMEOUT).

Here is the pmp (poor man's profiler) output:

   2585 __GI___poll,Curl_poll,curl_multi_wait,curl_easy_perform,getweb,athread,start_thread,clone,??
   1281 __GI___poll,Curl_poll,curl_multi_wait,curl_easy_perform,getweb,getweb,athread,start_thread,clone,??
    100 nanosleep,__sleep,athread,start_thread,clone,??
    ...

The curl command-line tool doesn't seem to have this issue; it finishes the same request in under a second.
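To confirm the stall really is in name resolution rather than the connect phase, libcurl's per-phase timers can be read back after the transfer. A minimal sketch, assuming a placeholder NXDOMAIN host:

#include <curl/curl.h>
#include <cstdio>

// Sketch: time the name-lookup phase in isolation.
// "nonexistent-domain.example" is a placeholder NXDOMAIN host.
int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://nonexistent-domain.example/");
    curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
    CURLcode rc = curl_easy_perform(curl);

    double lookup = 0.0, total = 0.0;
    curl_easy_getinfo(curl, CURLINFO_NAMELOOKUP_TIME, &lookup); // time until resolution completed
    curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total);       // whole transfer, including the stall
    printf("result=%s lookup=%.3fs total=%.3fs\n",
           curl_easy_strerror(rc), lookup, total);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}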

The question could be related to this one, as it seems I've solved one problem only to end up with another; I can't tell if it started after the last Ubuntu update or before.

Here is the libcurl code in my project:

PAGE_TIMEOUT=20;
curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, (PAGE_TIMEOUT-PAGE_TIMEOUT%3)/3); // one third of PAGE_TIMEOUT, rounded down to a whole number
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, true);
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 20);
curl_easy_setopt(curl, CURLOPT_TIMEOUT, PAGE_TIMEOUT);
curl_easy_setopt(curl, CURLOPT_URL, argv);
curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_string);
curl_easy_setopt(curl, CURLOPT_WRITEHEADER, &header);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, false);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, false);
curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
pthread_mutex_lock(&running_mutex);
    dns_index=DNS_SERVER_I; // read under the lock so it can't race with the increment below
    if(DNS_SERVER_I>DNS_SERVERS.size())
    {
        DNS_SERVER_I=1;
    }
    else
    {
        DNS_SERVER_I++;
    }
pthread_mutex_unlock(&running_mutex);
string dns_servers_string=DNS_SERVERS.at(dns_index%DNS_SERVERS.size())+","
                         +DNS_SERVERS.at((dns_index+1)%DNS_SERVERS.size())+","
                         +DNS_SERVERS.at((dns_index+2)%DNS_SERVERS.size());
curl_easy_setopt(curl, CURLOPT_DNS_SERVERS, dns_servers_string.c_str());
curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, false);

struct curl_slist *slist=NULL;
slist = curl_slist_append(slist, "ACCEPT: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,*/*;q=0.5");
slist = curl_slist_append(slist, "ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.7");
slist = curl_slist_append(slist, "ACCEPT_ENCODING: gzip,deflate");
slist = curl_slist_append(slist, "ACCEPT_LANGUAGE: en-gb,en;q=0.5");
slist = curl_slist_append(slist, "CONNECTION: keep-alive");
slist = curl_slist_append(slist, "KEEP_ALIVE: 300");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, slist);
string useragent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.16) Gecko/20080702 Firefox/2.0.0.16";
curl_easy_setopt(curl, CURLOPT_USERAGENT, useragent.c_str());   

sm=curl_easy_perform(curl);
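
To see what libcurl actually reports when this happens, the return code can be logged and verbose tracing switched on; the trace shows which phase the time goes into. A sketch of the same call with diagnostics (CURLOPT_VERBOSE must be set before the perform):

curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); // trace each connection phase to stderr
sm = curl_easy_perform(curl);
if(sm != CURLE_OK)
    fprintf(stderr, "libcurl: %s (code %d)\n", curl_easy_strerror(sm), (int)sm);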

I've built libcurl v7.30 from source (with c-ares, without IPv6) and installed it over the Ubuntu 12.10 repository version (not sure if it was actually overridden; I used make install).
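
One way to check which libcurl the binary actually loads at runtime, and whether the c-ares build took effect, is curl_version_info(). A sketch:

#include <curl/curl.h>
#include <cstdio>

// Sketch: print the loaded libcurl version and its resolver.
// info->ares should be non-NULL only when built with c-ares.
int main() {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    printf("libcurl %s, async DNS: %s, c-ares: %s\n",
           info->version,
           (info->features & CURL_VERSION_ASYNCHDNS) ? "yes" : "no",
           info->ares ? info->ares : "none");
    return 0;
}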

After the last automatic update I've been getting an error, but I don't know if it's related.

    I can't tell why non-existent domains would cause a delay. They certainly shouldn't. What is the error code ultimately reported by libcurl after the timeout? Then, since you are setting custom DNS servers, perhaps you should capture the DNS traffic to verify that the DNS request and responses contain what you think they do. Finally, your headers are very strange. For example, `ACCEPT_ENCODING` is not a standard header. That `_` should be a `-` (and furthermore, headers, although they are actually case-insensitive, are conventionally never represented with all caps). Use `Accept-Encoding`. – Celada Apr 27 '13 at 20:37
  • curl returned a timeout from DNS. It seems that from my list of >20 fully tested DNS servers, I've been left with only 6 operational in just a few days. I've tested for blacklisting, but they seem to be offline when pinged from other places. Any idea where I could get a list of reliable public DNS servers? Anyway, thanks for the tip. – Stefan Rogin Apr 27 '13 at 21:40
