
At our company we are planning to make our application Unicode-aware, and we are analyzing the problems we are going to encounter.

In particular, our application relies heavily on string lengths, and we would like to use wchar_t as the base character type.

The problem arises when dealing with characters that must be stored in two 16-bit code units in UTF-16, namely characters at U+10000 and above.

Simple example:

I have the UTF-8 string "蟂" (Unicode character U+87C2, in UTF-8: E8 9F 82)

So I write the following code:

#include <windows.h>

const unsigned char my_utf8_string[] = { 0xe8, 0x9f, 0x82, 0x00 };

// compute the required size of the wchar_t buffer
int nb_chars = ::MultiByteToWideChar(CP_UTF8,                                        // input is UTF-8
                                     0,                                              // no flags
                                     reinterpret_cast<const char *>(my_utf8_string), // input string (no worries about signedness)
                                     -1,                                             // input is zero-terminated
                                     NULL,                                           // no output this time
                                     0);                                             // ask for the necessary buffer size

// allocate
wchar_t *my_utf16_string = new wchar_t[nb_chars];

// convert
nb_chars = ::MultiByteToWideChar(CP_UTF8,
                                 0,
                                 reinterpret_cast<const char *>(my_utf8_string),
                                 -1,
                                 my_utf16_string, // output buffer
                                 nb_chars);       // allocated size

Okay, this works: it allocates two 16-bit units, and my buffer of wchar_t contains { 0x87c2, 0x0000 }. If I store it inside a std::wstring and compute the size, I get 1.

Now, let us take the character U+104A2 as input, in UTF-8: F0 90 92 A2.

This time it allocates space for three wchar_t, and std::wstring::size returns 2 even though I consider that I have only one character.
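
(For reference, the two extra code units are the UTF-16 surrogate pair for U+104A2. Here is a minimal sketch of how such a pair is derived for any code point at or above U+10000; the function name is only illustrative.)

#include <utility>

// Split a code point at or above U+10000 into its UTF-16 surrogate pair.
// For U+104A2 this yields 0xD801, 0xDCA2 -- hence the two wchar_t above.
std::pair<unsigned short, unsigned short> to_surrogate_pair(unsigned long code_point)
{
    const unsigned long v = code_point - 0x10000;                              // 20 significant bits
    const unsigned short high = static_cast<unsigned short>(0xD800 + (v >> 10));   // top 10 bits
    const unsigned short low  = static_cast<unsigned short>(0xDC00 + (v & 0x3FF)); // bottom 10 bits
    return std::make_pair(high, low);
}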

This is problematic. Let us assume that we receive data in UTF-8. We can count Unicode characters simply by not counting continuation bytes, i.e. bytes of the form 10xxxxxx. We would like to import that data into an array of wchar_t to work with it. If we just allocate the number of characters plus one, it might be safe... until someone uses a character above U+FFFF. Then our buffer will be too short and our application will crash.
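
A minimal sketch of that counting scheme, assuming well-formed UTF-8 input (the function name is only illustrative):

#include <cstddef>

// Count Unicode code points in a well-formed UTF-8 buffer by counting
// every byte that is not a continuation byte (10xxxxxx).
std::size_t count_code_points_utf8(const unsigned char *s, std::size_t len)
{
    std::size_t count = 0;
    for (std::size_t i = 0; i < len; ++i)
        if ((s[i] & 0xC0) != 0x80)   // not of the form 10xxxxxx
            ++count;
    return count;
}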

So, with the same string, encoded in different ways, functions that count characters in a string will return different values?

How are applications that work with Unicode strings designed to avoid this sort of annoyance?

Thank you for your replies.

Benoit
  • I find it amusing that you are afraid of a problem that UTF-16 has, when you are apparently already using UTF-8, which has the exact same problem. Store `UTF-8: F0 90 92 A2` in a `std::string` and its `length` member will return 4. – Mooing Duck Sep 11 '12 at 22:32
  • Related: http://www.joelonsoftware.com/articles/Unicode.html – Mooing Duck Sep 11 '12 at 22:33

2 Answers


You have to accept that std::wstring::size does not give the number of characters. Instead, it gives you the number of code units. If you have 16-bit code units, it determines how many of them you have in the string. Computing the number of Unicode characters would require looping over the string. It won't be annoying anymore once you accept it.
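
(For illustration, such a loop could look like the sketch below, assuming 16-bit wchar_t as on Windows and well-formed UTF-16; the function name is not part of any API.)

#include <cstddef>
#include <string>

// Count code points in a UTF-16 std::wstring by skipping low (trailing)
// surrogates, which are always the second half of a surrogate pair.
std::size_t count_code_points(const std::wstring &s)
{
    std::size_t count = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        const wchar_t c = s[i];
        if (c < 0xDC00 || c > 0xDFFF)   // not a low surrogate
            ++count;
    }
    return count;
}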

As for counting characters in UTF-8: don't. Instead, the code you posted is fine: calling MultiByteToWideChar once will tell you how many code units you need, and you then allocate the right number - whether it's for BMP characters or supplementary planes. If you absolutely want to write your own counting routines, have two of them: one that counts characters, and one that counts 16-bit code units. If the lead byte is 11110xxx, you need to count two code units.
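
(A sketch of the code-unit counter just described, assuming well-formed UTF-8; the character counter would look like the one sketched in the question.)

#include <cstddef>

// Count how many UTF-16 code units a well-formed UTF-8 buffer will need.
// Continuation bytes (10xxxxxx) are skipped; a lead byte of the form
// 11110xxx starts a code point above U+FFFF and therefore needs two units.
std::size_t utf16_units_needed(const unsigned char *s, std::size_t len)
{
    std::size_t units = 0;
    for (std::size_t i = 0; i < len; ++i) {
        if ((s[i] & 0xC0) == 0x80)                // continuation byte
            continue;
        units += ((s[i] & 0xF8) == 0xF0) ? 2 : 1; // 11110xxx -> surrogate pair
    }
    return units;
}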

Martin v. Löwis
  • Do you mean when the lead byte is `11010xxx` (D8->DB)? – Benoit Dec 07 '10 at 13:11
  • No, I really meant 11110xxx (F0). 11010xxx is not allowed in UTF-8. – Martin v. Löwis Dec 07 '10 at 13:17
  • Sorry, I believed you were talking about UTF-16 strings, where 32-bit characters have their first and third bytes in the range D8-DF. But then, how can I explain to my customers that “no, in this database field in which you can only put one character, you cannot put that character, because my fixed buffer sizes can't handle it”? – Benoit Dec 07 '10 at 13:47
  • 6
    You shouldn't have size constraints on fields shorter than what your customers want to put into the fields. In any case, I believe many database systems will count bytes when constraining the size of CHAR strings, in addition, they will often use UTF-8, which is variable-sized. As for potential user confusion: you can also get confused users if you count characters rather than code units. If you have combining characters, they will count as separate characters, but render as combined glyphs. So if you the the user that "Lowis" fits, but "Löwis" does not, they would be equally confused. – Martin v. Löwis Dec 07 '10 at 15:15

I suggest you read the following FAQ from the official Unicode web site: http://www.unicode.org/faq//utf_bom.html

Basically, it is important to distinguish between code units, code points and characters.
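
As a concrete illustration with the character from the question (a sketch assuming a C++11 compiler, so that the u and U string literals are available):

#include <iostream>
#include <string>

int main()
{
    // The same abstract character, U+104A2, in three encoding forms.
    std::string    utf8  = "\xF0\x90\x92\xA2"; // 4 UTF-8 code units
    std::u16string utf16 = u"\U000104A2";      // 2 UTF-16 code units (0xD801 0xDCA2)
    std::u32string utf32 = U"\U000104A2";      // 1 UTF-32 code unit = 1 code point

    std::cout << utf8.size()  << ' '   // prints 4
              << utf16.size() << ' '   // prints 2
              << utf32.size() << '\n'; // prints 1
    return 0;
}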

Nemanja Trifunovic