
I am trying to crawl a Russian website from Linux, but the output seems to be junk characters. The website is encoded in UTF-8, and while reading I am setting the encoding to UTF-8. However, this doesn't solve the problem. What should I do to read it correctly?

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class Crawl {
    public static void main(String[] args) throws Exception {
        URL my_url = new URL("http://www.fmsmoscow.ru/docs/migration_registration/registration.html");
        BufferedReader br = new BufferedReader(new InputStreamReader(my_url.openStream(), "UTF-8"));
        String strTemp;
        while (null != (strTemp = br.readLine())) {
            System.out.println(strTemp);
        }
    }
}

Above is the code. I export it as a jar, copy it to the Linux server, and run it there; the output appears in the Linux console.

Jigar
  • There are nowhere near enough details in this question to be able to help in any way. – deceze Mar 18 '15 at 14:33
  • Tell me what details you require and I'll give them. – Jigar Mar 19 '15 at 06:20
  • 1) "website is encoded in UTF-8" - how have you confirmed that? 2) "while reading I am setting the encoding as UTF-8" - what does that mean exactly? Code please. 3) "the output seems to be junk characters" - where how when what are you outputting to? – deceze Mar 19 '15 at 07:32
  • I know that the website is encoded in UTF-8 on viewing its page source. URL my_url = new URL("http://www.fmsmoscow.ru/docs/migration_registration/registration.html");BufferedReader br = new BufferedReader(new InputStreamReader(my_url.openStream(),"Cp1251")); while (null != (strTemp = br.readLine())){ System.out.println(strTemp);} – Jigar Mar 19 '15 at 07:45
  • 1) Is this Java we're talking about? 2) Why are you setting the encoding to `Cp1251` explicitly even though the site is supposedly UTF-8 encoded? 3) Please [edit your question](http://stackoverflow.com/posts/29124804/edit) and add code to it (and the appropriate tags for whatever language you're using). – deceze Mar 19 '15 at 07:51
  • That page is *not* encoded in UTF-8, the HTTP headers specify `windows-1251`. – deceze Mar 19 '15 at 08:09
  • Yes, it is just a demo page, but the one I need to crawl is in UTF-8. I can't give that URL for client confidentiality reasons. – Jigar Mar 19 '15 at 08:12
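
As the comments point out, the demo page's HTTP headers declare windows-1251, not UTF-8. A minimal, untested sketch of picking up whatever charset the Content-Type header actually declares, using URLConnection on my_url from the question; the UTF-8 fallback and the parsing here are illustrative assumptions:

URLConnection conn = my_url.openConnection();
String contentType = conn.getContentType();  // e.g. "text/html; charset=windows-1251"
String charset = "UTF-8";                    // fallback when no charset is declared (assumption)
if (contentType != null) {
    for (String param : contentType.split(";")) {
        param = param.trim();
        if (param.toLowerCase().startsWith("charset=")) {
            charset = param.substring("charset=".length());
        }
    }
}
BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream(), charset));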

1 Answer


In general you could also store the content as binary (as-is) and then look at where it goes wrong, say in a programmer's editor like JEdit or Notepad++ that can switch encodings. Native Windows-1251 text might have slipped in via HTML comments, so stripping HTML comments may help.
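
A minimal sketch of saving the response byte-for-byte (no charset conversion), assuming my_url from the question; the output file name is just an example:

try (InputStream in = my_url.openStream()) {
    // Copy the stream as-is; inspect page.raw.html later in an editor that can switch encodings.
    Files.copy(in, Paths.get("page.raw.html"), StandardCopyOption.REPLACE_EXISTING);
}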

For fault-tolerant decoding with error reporting, one needs a CharsetDecoder.

CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder(); // or StandardCharsets.ISO_8859_1
decoder.onMalformedInput(CodingErrorAction.IGNORE);           // silently drop bytes that are not valid UTF-8
InputStreamReader reader = new InputStreamReader(my_url.openStream(), decoder);

See the javadoc on CodingErrorAction.REPLACE and REPORT. If you want more control, the approach is different: you need to read the input, place the bytes in a ByteBuffer, and then call

CoderResult result = decoder.decode(byteBuffer, charBuffer, true /* end of input */);
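
For instance, a sketch along those lines (untested; it assumes the whole page has already been read into a byte[] called bytes, e.g. via a ByteArrayOutputStream, and uses java.nio.ByteBuffer/CharBuffer):

CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
decoder.onMalformedInput(CodingErrorAction.REPORT);      // stop at the first invalid sequence
decoder.onUnmappableCharacter(CodingErrorAction.REPORT);

ByteBuffer byteBuffer = ByteBuffer.wrap(bytes);
CharBuffer charBuffer = CharBuffer.allocate(bytes.length);
CoderResult result = decoder.decode(byteBuffer, charBuffer, true /* end of input */);
if (result.isError()) {
    // byteBuffer.position() is the offset of the first byte that is not valid UTF-8.
    System.err.println("Decoding failed at byte offset " + byteBuffer.position()
            + ", error length " + result.length());
} else {
    charBuffer.flip();
    System.out.println(charBuffer);
}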

To pick Cp1251 Cyrillic bytes out of otherwise valid UTF-8, one can check the validity of the UTF-8 sequences (the lead byte's high bits give the sequence length; every further byte must be a continuation byte of the form 10xxxxxx):

  • 0xxxxxxx
  • 110xxxxx 10xxxxxx
  • 1110xxxx 10xxxxxx 10xxxxxx
  • ...

Hence (untested):

void patchMixOfUtf8AndCp1251(byte[] bytes, StringBuilder sb) {
    boolean priorWrongUtf8 = false;
    for (int i = 0; i < bytes.length; ++i) {
        byte b = bytes[i];
        if (b >= 0) {
            // Plain ASCII byte; identical in UTF-8 and Cp1251.
            sb.appendCodePoint((int) b);
            priorWrongUtf8 = false;
        } else {
            int n = highBits(b); // Also # bytes in the sequence
            boolean isUTF8 = !priorWrongUtf8
                    && 1 < n && n <= 6
                    && i + n <= bytes.length;
            if (isUTF8) {
                // The following bytes of the sequence must all be continuation bytes (10xxxxxx).
                for (int j = i + 1; j < i + n; ++j) {
                    if (highBits(bytes[j]) != 1) {
                        isUTF8 = false;
                        break;
                    }
                }
            }
            if (isUTF8) {
                sb.append(new String(bytes, i, n, StandardCharsets.UTF_8));
                i += n - 1;
            } else {
                // Not a valid UTF-8 sequence, so treat the single byte as Cp1251.
                sb.append(new String(bytes, i, 1, Charset.forName("windows-1251")));
            }
            priorWrongUtf8 = !isUTF8;
        }
    }
}

// Counts the leading 1-bits of the byte: 0 for ASCII, 1 for a continuation byte,
// 2..6 for a UTF-8 lead byte.
private int highBits(byte b) {
    int n = 0;
    while (n < 8 && ((1 << (7 - n)) & b) != 0) {
        ++n;
    }
    return n;
}

As it is unlikely that a Cp1251 Cyrillic byte is immediately followed by a valid UTF-8 sequence, the priorWrongUtf8 state is used to keep treating consecutive high bytes as Cp1251.
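
A possible way to use it (again untested; bytes being the raw page content read earlier):

StringBuilder sb = new StringBuilder();
patchMixOfUtf8AndCp1251(bytes, sb);
System.out.println(sb);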

Joop Eggen