
I'm trying to convert a UTF-16 string (obtained from a JSString in SpiderMonkey 19) into a UTF-8 string. I think the converted string is OK, but for some reason the conversion routine is adding two extra bytes for every Unicode (non-ASCII) character. I'm pretty sure I'm doing something wrong; I tried different encodings with no better result. This is what I'm getting now:

// UTF-16 string "áéíóúñ aeiou", this is the string being converted
// (you can find "aeiou" after \x20\x00, where \x61\x00 is "a")
\xC3\x00\xA1\x00\xC3\x00\xA9\x00\xC3\x00\xAD\x00\xC3\x00\xB3\x00\xC3\x00\xBA\x00\xC3\x00\xB1\x00\x20\x00\x61\x00\x65\x00\x69\x00\x6F\x00\x75\x00\x6E\x00

// UTF-8 string, test string, taken from:
// const char* cmp = "áéíóúñ aeiou"
// This is the result I'm looking for.
\xc3\xa1\xc3\xa9\xc3\xad\xc3\xb3\xc3\xba\xc3\xb1 aeiou

// UTF-8 string I'm getting after iconv(utf16, utf8)
\xc3\x83\xc2\xa1\xc3\x83\xc2\xa9\xc3\x83\xc2\xad\xc3\x83\xc2\xb3\xc3\x83\xc2\xba\xc3\x83\xc2\xb1 aeioun

As you can see, there are two extra bytes (\x83\xc2) for every non-ASCII character. Does anyone know why that is?

This is my conversion routine:

#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <memory>
#include <iconv.h>

using std::shared_ptr;

shared_ptr<char> convertToUTF8(char* utf16string, size_t len) {
    iconv_t cd = iconv_open("UTF-8", "UTF-16LE");
    if (cd == (iconv_t)-1) {
        fprintf(stderr, "iconv_open failed: %s\n", strerror(errno));
        return shared_ptr<char>();
    }

    char* utf8;
    size_t utf8len;

    // 2 bytes of UTF-16 can expand to 3 bytes of UTF-8, so reserve extra room
    utf8len = len * 2;
    utf8 = (char *)calloc(utf8len, 1);
    // the buffer comes from calloc, so release it with free, not delete
    shared_ptr<char> outptr(utf8, free);

    size_t converted = iconv(cd, &utf16string, &len, &utf8, &utf8len);
    if (converted == (size_t)-1) {
        fprintf(stderr, "iconv failed\n");
        switch (errno) {
            case EILSEQ:
                fprintf(stderr, "Invalid multibyte sequence.\n");
                break;
            case EINVAL:
                fprintf(stderr, "Incomplete multibyte sequence.\n");
                break;
            case E2BIG:
                fprintf(stderr, "No more room (iconv).\n");
                break;
            default:
                fprintf(stderr, "Error: %s.\n", strerror(errno));
                break;
        }
        outptr.reset();  // hand back an empty pointer on failure
    }
    iconv_close(cd);
    return outptr;
}
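
For context, a typical way to get the raw UTF-16 data out of a JSString looks roughly like this (illustrative sketch only, not my exact call site; cx is the JSContext and str the JSString, and jschar is SpiderMonkey's 16-bit character type):

// Sketch: JS_GetStringCharsAndLength reports the length in jschar (16-bit)
// units, so the byte count handed to the conversion routine is len16 * sizeof(jschar).
size_t len16 = 0;
const jschar* chars = JS_GetStringCharsAndLength(cx, str, &len16);
shared_ptr<char> utf8 = convertToUTF8((char *) chars, len16 * sizeof(jschar));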

I also tried the solution in this other question, but I got exactly the same result. Any ideas why iconv is adding the two extra bytes? How can I make the result match the manually created UTF-8 string?

EDIT: fixed description of the test string


1 Answer


Why don't you just use "UTF16" or "UTF-16" instead of "UTF-16LE"? From 'man iconv_open', it seems there are six different codings for UTF-16:

UTF-16// UTF-16BE// UTF-16LE// UTF16// UTF16BE// UTF16LE//
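
For example (just a sketch, untested; note that, at least with glibc, "UTF-16" input without a BOM is treated as big-endian, so the endianness-specific names may still matter):

iconv_t cd = iconv_open("UTF-8", "UTF-16");
if (cd == (iconv_t)-1)
    perror("iconv_open");  // conversion from UTF-16 to UTF-8 not available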

I don't have experience with iconv myself, but I have converted a JSString to a gchar* using the following function:

gchar* gtweet_jsengine_jsval2gchar(GtweetTwitterClient *self, jsval value)
{
  JSContext *jscontext = NULL;
  JSString *string = NULL;
  GError *error = NULL;
  gunichar2 *utf16_string = NULL;
  gsize utf16_length = 0;
  glong rlen = 0;
  glong wlen = 0;
  gchar *ret = NULL;

  jscontext = self->priv->jscontext;
  JS_BeginRequest(jscontext);
  string = JS_ValueToString(jscontext, value);
  utf16_string = (gunichar2 *) JS_GetStringCharsAndLength(jscontext, string, &utf16_length);
  /* g_utf16_to_utf8 returns a newly allocated UTF-8 string (free it with g_free) */
  ret = g_utf16_to_utf8(utf16_string, utf16_length, &rlen, &wlen, &error);
  if(error)
    { 
      g_printerr("%s: %d: %s [rlen: %ld wlen: %ld]\n", g_quark_to_string(error->domain), error->code, error->message, rlen, wlen);
      g_error_free(error);
      JS_EndRequest(jscontext);
      return NULL;
    }
  JS_EndRequest(jscontext);
  return ret;
}
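
A stripped-down sketch of the same g_utf16_to_utf8 call without the GtweetTwitterClient/SpiderMonkey plumbing (my guess at minimal usage, build against glib-2.0; the returned string is newly allocated and must be freed with g_free):

#include <glib.h>
#include <stdio.h>

int main(void)
{
  gunichar2 utf16[] = { 0x00E1, 0x00F1, 0x0020, 0x0061 };  /* "áñ a" as UTF-16 code units */
  GError *error = NULL;
  gchar *utf8 = NULL;

  utf8 = g_utf16_to_utf8(utf16, G_N_ELEMENTS(utf16), NULL, NULL, &error);
  if(error)
    {
      g_printerr("%s\n", error->message);
      g_error_free(error);
      return 1;
    }

  printf("%s\n", utf8);  /* prints "áñ a" */
  g_free(utf8);
  return 0;
}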