
I am having problems with converting UTF-8 to Unicode.

Below is the code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <iconv.h>

int charset_convert(char *string, char *to_string, char *charset_from, char *charset_to)
{
    char *from_buf, *to_buf, *pointer;
    size_t inbytesleft, outbytesleft, ret;
    size_t TotalLen;
    iconv_t cd;

    if (!charset_from || !charset_to || !string) /* sanity check */
        return -1;

    if (strlen(string) < 1)
        return 0; /* we are done, nothing to convert */

    cd = iconv_open(charset_to, charset_from);
    /* Did I succeed in getting a conversion descriptor ? */
    if (cd == (iconv_t)(-1)) {
        /* I guess not */
        printf("Failed to convert string from %s to %s ",
              charset_from, charset_to);
        return -1;
    }
    from_buf = string;
    inbytesleft = strlen(string);
    /* allocate max sized buffer, 
       assuming target encoding may be 4 byte unicode */
    outbytesleft = inbytesleft *4 ;
    pointer = to_buf = (char *)malloc(outbytesleft);
    memset(to_buf,0,outbytesleft);
    memset(pointer,0,outbytesleft);

    ret = iconv(cd, &from_buf, &inbytesleft, &pointer, &outbytesleft);
    memcpy(to_string, to_buf, (pointer - to_buf));
}

main():

int main()
{    
    char  UTF []= {'A', 'B'};
    char  Unicode[1024]= {0};
    char* ptr;
    int x=0;
    iconv_t cd;

    charset_convert(UTF,Unicode,"UTF-8","UNICODE");

    ptr = Unicode;

    while(*ptr != '\0')
    {   
        printf("Unicode %x \n",*ptr);
        ptr++;
    }
    return 0;
}

It should give A and B, but I am getting:

ffffffff
fffffffe
41 

Thanks, Sandeep

  • Could you fix your question a little bit? It is quite unreadable as is. Additionally, "UTF-8 to Unicode conversion" doesn't make sense. Unicode is a specification, and UTF-8 is a "format" for storing data for use in Unicode-related fields. – soc Jan 16 '11 at 11:35
  • Did you try to understand what it does, or did you just copy'n'paste it from somewhere (judging from the line numbers all over the place)? – soc Jan 16 '11 at 11:40
  • Thanks Soc, I went through the link mentioned below and was trying to understand whether the Unicode binary representation and the corresponding UTF-8 are different. – sandeep Jan 16 '11 at 11:45

4 Answers


It looks like you are getting UTF-16 out in little-endian format:

ff fe 41 00 ...

Which is U+FEFF (ZWNBSP aka byte order mark), U+0041 (latin capital letter A), ...

You then stop printing because your while loop terminates on the first null byte. The following bytes should be: 42 00.

You should either return a length from your function or make sure that the output is terminated with a null character (U+0000) and loop until you find this.
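
For example, a minimal sketch of the first option (returning a byte count) might look like the following. It assumes a POSIX iconv, keeps the question's "UNICODE" target (however your iconv resolves that name), and prints each byte through an unsigned char cast, which also avoids the sign-extended ffffffff seen in the question's output:

#include <stdio.h>
#include <string.h>
#include <iconv.h>

/* Sketch only: like the question's charset_convert(), but it returns the
   number of bytes written so the caller knows where the output ends
   (UTF-16 output contains 0x00 bytes, so stopping at the first null byte
   quits too early). Returns -1 on error. */
long charset_convert(char *in, char *out, size_t out_size,
                     const char *charset_from, const char *charset_to)
{
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = out_size;

    iconv_t cd = iconv_open(charset_to, charset_from);
    if (cd == (iconv_t)-1)
        return -1;
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
        iconv_close(cd);
        return -1;
    }
    iconv_close(cd);
    return (long)(outp - out);   /* bytes written, including any BOM */
}

int main(void)
{
    char utf8[] = "AB";
    char out[1024];
    long n = charset_convert(utf8, out, sizeof out, "UTF-8", "UNICODE");

    /* Print every converted byte; don't stop at the first 0x00. */
    for (long i = 0; i < n; i++)
        printf("%02x ", (unsigned char)out[i]);
    printf("\n");                /* e.g. ff fe 41 00 42 00 */
    return 0;
}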

CB Bailey
  • you are correct, of course, but I think there is a deeper conceptual problem in the OP's question that needs to be cleared before your answer makes sense. In any case, upvote. – Dervin Thunk Jan 16 '11 at 12:25

UTF-8 is Unicode.

You do not need to convert unless you need some other Unicode encoding, such as UTF-16 or UTF-32.

Artyom
  • @BlackBear: I think he knows that and that isn't his point. No reason to downvote. – soc Jan 16 '11 at 11:47
  • @Artyom: I didn't downvote you. I downvote only rude or very wrong answers. – BlackBear Jan 16 '11 at 11:48
  • @Philipp: That's like saying "1" has nothing to do with number theory. Most people here understand what the relationship between UTF-8 and Unicode is, and those who don't are those who don't care. – soc Jan 16 '11 at 12:16
  • @soc: Saying that "1 is number theory" would be equally wrong. The relationship between UTF-8 and Unicode is important for this question, and apparently the OP has a misconception about it. Adding another wrong statement won't help. (BTW, I too didn't downvote.) – Philipp Jan 16 '11 at 12:22

UTF is not Unicode. UTF is an encoding of the integers in the Unicode standard. The question, as is, makes no sense. If you mean you want to convert from (any) UTF to the Unicode code point (i.e. the integer that stands for an assigned code point, roughly a character), then you need to do a bit of reading, but it boils down to bit-shifting the values of the 1, 2, 3, or 4 bytes in a UTF-8 byte sequence (see Wikipedia; Markus Kuhn's text is also excellent).
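
To make the bit-shifting concrete, here is a rough sketch of such a decoder. It does no validation of overlong forms, surrogates, or continuation bytes, and the function name and sample string are made up purely for illustration:

#include <stdio.h>

/* Decode one UTF-8 sequence starting at s into a code point.
   Returns the number of bytes consumed, or 0 on an invalid leading byte. */
static int utf8_decode(const unsigned char *s, unsigned int *cp)
{
    if (s[0] < 0x80) {                       /* 0xxxxxxx: 1 byte           */
        *cp = s[0];
        return 1;
    } else if ((s[0] & 0xE0) == 0xC0) {      /* 110xxxxx 10xxxxxx: 2 bytes */
        *cp = (s[0] & 0x1F) << 6 | (s[1] & 0x3F);
        return 2;
    } else if ((s[0] & 0xF0) == 0xE0) {      /* 1110xxxx ...: 3 bytes      */
        *cp = (s[0] & 0x0F) << 12 | (s[1] & 0x3F) << 6 | (s[2] & 0x3F);
        return 3;
    } else if ((s[0] & 0xF8) == 0xF0) {      /* 11110xxx ...: 4 bytes      */
        *cp = (s[0] & 0x07) << 18 | (s[1] & 0x3F) << 12
            | (s[2] & 0x3F) << 6  | (s[3] & 0x3F);
        return 4;
    }
    return 0;
}

int main(void)
{
    const unsigned char text[] = "A\xC3\xA9";   /* "Aé" in UTF-8 */
    unsigned int cp;
    for (int i = 0; text[i]; ) {
        int n = utf8_decode(text + i, &cp);
        if (!n) break;
        printf("U+%04X\n", cp);                 /* U+0041, U+00E9 */
        i += n;
    }
    return 0;
}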

Dervin Thunk

Unless I am missing something (nobody has pointed it out yet), "UNICODE" isn't a valid encoding name in libiconv, as it is the name of a family of encodings.

http://www.gnu.org/software/libiconv/

(edit) Actually, iconv -l shows UNICODE as a listed entry but gives no details; in the source code it's listed in the notes as an alias for UNICODE-LITTLE, but the subnotes mention:

 * UNICODE (big endian), UNICODEFEFF (little endian)
   We DON'T implement these because they are stupid and not standardized.

In the aliases header files, UNICODELITTLE (no hyphen) resolves as follows:

lib/aliases.gperf:UNICODELITTLE, ei_ucs2le

i.e. UCS2-LE (UTF-16 Little Endian), which should match Windows internal "Unicode" encoding.

http://en.wikipedia.org/wiki/UTF-16/UCS-2

However, you are clearly better off explicitly specifying UCS2-LE or UCS2-BE, unless the first bytes are a Byte Order Mark (BOM), value 0xFEFF, indicating the byte order.

=> You are seeing the BOM as the first bytes of the output because that is what the "UNICODE" encoding name means: UCS2 with a leading byte order mark indicating the byte order.
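
A quick sketch of what that looks like in practice, assuming your iconv knows an explicit little-endian name such as "UTF-16LE" or "UCS-2LE" (GNU libiconv and glibc do): asking for a fixed byte order gives you just the code units, with no ff fe header in front:

#include <stdio.h>
#include <string.h>
#include <iconv.h>

int main(void)
{
    char in[] = "AB";
    char out[64];
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof out;

    /* Ask for an explicit byte order instead of the "UNICODE" family name,
       so no byte order mark is prepended to the output. */
    iconv_t cd = iconv_open("UTF-16LE", "UTF-8");
    if (cd == (iconv_t)-1 ||
        iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
        perror("iconv");
        return 1;
    }
    iconv_close(cd);

    for (char *p = out; p < outp; p++)
        printf("%02x ", (unsigned char)*p);
    printf("\n");                /* 41 00 42 00 -- no ff fe in front */
    return 0;
}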

Steve-o