The site is technically broken: it sends the page back gzip-encoded regardless of whether the client's Accept-Encoding header says it can cope with that. This works in all modern web browsers, because they either request the page compressed by default or cope with a gzipped response even when they didn't ask for one.
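To see the breakage for yourself, you could fetch the page while explicitly declining compression and check for the gzip magic bytes. This is just a diagnostic sketch (sending the header via a stream context is one of several ways to do it):

```php
<?php
// Request the page while explicitly declining compressed responses.
$ctx = stream_context_create([
    'http' => ['header' => "Accept-Encoding: identity\r\n"],
]);
$raw = file_get_contents('http://actualidad.rt.com/actualidad', false, $ctx);

// A gzip stream always starts with the magic bytes 0x1f 0x8b.
if (substr($raw, 0, 2) === "\x1f\x8b") {
    echo "Server sent gzip even though we asked for identity\n";
}
```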
You could go down the route suggested in the answer to the question that Wouter points out, but I'd suggest using PHP's curl library instead. That should be able to decode the requested page transparently.
For example:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://actualidad.rt.com/actualidad');
// Return the response as a string rather than printing it immediately
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Advertise gzip support and have curl decompress the response
curl_setopt($ch, CURLOPT_ENCODING, 'gzip');
echo curl_exec($ch);
curl_close($ch);
You should find that this outputs the actual HTML of the web page. That's down to the CURLOPT_ENCODING option being set to "gzip": with it set, curl sends an Accept-Encoding: gzip request header and transparently decompresses a gzipped response for you.
I think this is a better solution than unzipping the page manually: if the site is ever fixed so that it sensibly returns a non-gzipped page when the client says it can't cope with gzip, this code will carry on working.
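For comparison, the manual approach would look something like this sketch. It has to assume the response is gzipped, which is exactly the assumption that makes it fragile, so it needs a fallback for the case where the data turns out not to be compressed:

```php
<?php
// Fetch the raw (compressed) bytes and decompress them by hand.
$raw = file_get_contents('http://actualidad.rt.com/actualidad');
$html = @gzdecode($raw);

// gzdecode() returns false if the data isn't actually gzipped,
// which is what would happen if the site were ever fixed.
if ($html === false) {
    $html = $raw;
}
echo $html;
```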