In general you could also store the content as raw bytes (as-is) and then look at where it goes wrong, say in a programmer's editor like jEdit or Notepad++ that can switch encodings. The native Windows-1251 bytes might be slipped in via HTML comments; maybe stripping HTML comments helps.
For fault-tolerant decoding with error reporting, one needs a CharsetDecoder:
CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder(); // or ISO_8859_1
decoder.onMalformedInput(CodingErrorAction.IGNORE);
InputStreamReader reader = new InputStreamReader(my_url.openStream(), decoder);
...
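A minimal sketch of that lenient variant, reading from a ByteArrayInputStream instead of a URL stream so it runs standalone (the input bytes are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class LenientRead {

    // Reads all chars, silently dropping malformed UTF-8 byte sequences.
    static String readLenient(byte[] bytes) throws IOException {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
        decoder.onMalformedInput(CodingErrorAction.IGNORE);
        StringBuilder sb = new StringBuilder();
        try (Reader reader = new InputStreamReader(new ByteArrayInputStream(bytes), decoder)) {
            int c;
            while ((c = reader.read()) != -1) {
                sb.append((char) c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // "ab" + a stray byte (0xFF, never valid in UTF-8) + "c": the bad byte is ignored.
        byte[] bytes = {'a', 'b', (byte) 0xFF, 'c'};
        System.out.println(readLenient(bytes)); // prints "abc"
    }
}
```

With IGNORE the bad bytes vanish silently, which is fine for inspection but hides where the damage is.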
See the javadoc on CodingErrorAction.REPLACE and REPORT. If you want more control, the approach is different: read the input yourself, place the bytes in a ByteBuffer, and then
CoderResult result = decoder.decode(byteBuffer, charBuffer, true /*EOF*/);
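A small self-contained sketch of that manual loop with REPORT, so the position of the first bad sequence can be inspected (the sample bytes are invented: valid UTF-8 for "П" followed by a bad byte):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class DecodeReport {

    // Decodes as much valid UTF-8 as possible and reports the first malformed sequence.
    static String decodeReporting(byte[] bytes) {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer in = ByteBuffer.wrap(bytes);
        CharBuffer out = CharBuffer.allocate(bytes.length);
        CoderResult result = decoder.decode(in, out, true /*EOF*/);
        if (result.isMalformed()) {
            // in.position() stops just before the offending bytes.
            System.out.println("Malformed sequence of " + result.length()
                    + " byte(s) at byte offset " + in.position());
        }
        out.flip();
        return out.toString();
    }

    public static void main(String[] args) {
        // 0xD0 0x9F is UTF-8 for 'П'; 0xEF followed by 'a' is malformed.
        byte[] bytes = {(byte) 0xD0, (byte) 0x9F, (byte) 0xEF, 'a'};
        System.out.println("Decoded so far: " + decodeReporting(bytes));
    }
}
```

Looping on decode after each error (skipping or re-decoding the reported bytes) is what the patch function further below does by hand.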
To pick the Cp1251 (Cyrillic) bytes out of an otherwise UTF-8 stream, one can check the validity of UTF-8 sequences, whose byte patterns are:
- 0xxxxxxx
- 110xxxxx 10xxxxxx
- 1110xxxx 10xxxxxx 10xxxxxx
- ...
Hence (untested):
void patchMixOfUtf8AndCp1251(byte[] bytes, StringBuilder sb) {
    final Charset cp1251 = Charset.forName("Windows-1251");
    boolean priorWrongUtf8 = false;
    for (int i = 0; i < bytes.length; ++i) {
        byte b = bytes[i];
        if (b >= 0) { // ASCII, 0xxxxxxx
            sb.appendCodePoint(b);
            priorWrongUtf8 = false;
        } else {
            int n = highBits(b); // Also # bytes in the sequence
            boolean isUTF8 = !priorWrongUtf8
                && 1 < n && n <= 6
                && i + n <= bytes.length;
            if (isUTF8) {
                for (int j = 1; j < n; ++j) {
                    if (highBits(bytes[i + j]) != 1) { // Not a continuation byte 10xxxxxx
                        isUTF8 = false;
                        break;
                    }
                }
            }
            if (isUTF8) {
                sb.append(new String(bytes, i, n, StandardCharsets.UTF_8));
                i += n - 1;
            } else {
                // Not a valid UTF-8 sequence; must be Cp1251.
                sb.append(new String(bytes, i, 1, cp1251));
            }
            priorWrongUtf8 = !isUTF8;
        }
    }
}
private int highBits(byte b) {
    int n = 0;
    while (n < 8 && ((1 << (7 - n)) & b) != 0) {
        ++n;
    }
    return n;
}
As it is unlikely that valid UTF-8 immediately follows a Cp1251 byte, the state priorWrongUtf8
is used: the byte right after a Cp1251 byte is treated as Cp1251 too.
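Putting it together, a runnable sketch of the same idea, with the helper repeated so it compiles on its own; the mixed input ("Привет " as UTF-8 followed by "мир" as Cp1251) is a made-up example:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MixedDecode {

    static final Charset CP1251 = Charset.forName("Windows-1251");

    static void patchMixOfUtf8AndCp1251(byte[] bytes, StringBuilder sb) {
        boolean priorWrongUtf8 = false;
        for (int i = 0; i < bytes.length; ++i) {
            byte b = bytes[i];
            if (b >= 0) { // ASCII
                sb.appendCodePoint(b);
                priorWrongUtf8 = false;
            } else {
                int n = highBits(b); // Also # bytes in the sequence
                boolean isUTF8 = !priorWrongUtf8 && 1 < n && n <= 6 && i + n <= bytes.length;
                if (isUTF8) {
                    for (int j = 1; j < n; ++j) {
                        if (highBits(bytes[i + j]) != 1) { // Not a continuation byte
                            isUTF8 = false;
                            break;
                        }
                    }
                }
                if (isUTF8) {
                    sb.append(new String(bytes, i, n, StandardCharsets.UTF_8));
                    i += n - 1;
                } else {
                    sb.append(new String(bytes, i, 1, CP1251)); // Single Cp1251 byte
                }
                priorWrongUtf8 = !isUTF8;
            }
        }
    }

    static int highBits(byte b) {
        int n = 0;
        while (n < 8 && ((1 << (7 - n)) & b) != 0) {
            ++n;
        }
        return n;
    }

    public static void main(String[] args) {
        byte[] utf8 = "Привет ".getBytes(StandardCharsets.UTF_8);
        byte[] cp1251 = "мир".getBytes(CP1251);
        byte[] mixed = new byte[utf8.length + cp1251.length];
        System.arraycopy(utf8, 0, mixed, 0, utf8.length);
        System.arraycopy(cp1251, 0, mixed, utf8.length, cp1251.length);
        StringBuilder sb = new StringBuilder();
        patchMixOfUtf8AndCp1251(mixed, sb);
        System.out.println(sb); // prints "Привет мир"
    }
}
```

The heuristic can still misfire when a run of Cp1251 Cyrillic bytes happens to form a valid UTF-8 sequence, so treat the output as a best-effort repair, not a guaranteed one.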