
I'm not very fluent in C++, to tell the truth.

I have some binary data in memory under a void* type (which means, I think, a pointer to something with no particular type). The data is first read from the file by fread.

int readfile(FILE *file, void **data_return) {
    //some code...

    fread((void *)data, length, 1, file);

    //some code...
}

There's a complex algorithm behind reading the binary data, but I think I don't need to understand it for this task.

char *t = ((char *)loc->mo_data) + string_offset;
return t;

This code reads the void* data (loc->mo_data) as a string. Still understandable for me, I guess.

The problem is that this data contains Russian, Spanish, Czech and all sorts of international characters representable in UTF-8.

I'm not even sure what encoding the "char" represents, probably win1250, because the strings returned are just wrong. The function returns mojibake instead of Организация: the raw UTF-8 bytes are being shown as if each byte were a separate single-byte character.
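For illustration, here is a minimal C# sketch (not from the library; the code page 1250 choice is just my guess) showing how decoding UTF-8 bytes with a single-byte legacy encoding produces exactly this kind of garbage:

    // Illustrative only: decode UTF-8 bytes with a wrong single-byte code page.
    // (On .NET Core/.NET 5+ the legacy code pages need
    // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance) first.)
    using System;
    using System.Text;

    class MojibakeDemo
    {
        static void Main()
        {
            byte[] utf8Bytes = Encoding.UTF8.GetBytes("Организация");

            // Interpreting the UTF-8 bytes as Windows-1250 yields mojibake.
            string garbled = Encoding.GetEncoding(1250).GetString(utf8Bytes);
            Console.WriteLine(garbled);

            // Interpreting them as UTF-8 gives the original text back.
            Console.WriteLine(Encoding.UTF8.GetString(utf8Bytes));
        }
    }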

The bigger picture: I'm playing with a C++ library that was already written by someone else. The library exposes just two functions: open a file (returns a pointer) and get a string from this file by a string key (returns a string). This library is being used in a C# project.

At first, I thought that there might be something wrong with passing UTF-8 strings between a C# app and the DLL library:

    [DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr OpenFile(string path);

    [DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern string FindString(IntPtr filePointer, string key);

C++ code:

    extern "C" __declspec(dllexport) BinaryFileType* OpenFile(char *filePath);
    extern "C" __declspec(dllexport) char *FindString(BinaryFileType *locText, char *key);

FindString returns the string, but in a wrong encoding. And I don't know how one could take that mis-decoded text in a C# string (which is Unicode) and read it back as UTF-8... I tried some conversion methods, but to no avail.

I think the problem is in the C++ code, though; I'd love the char data to be in the UTF-8 encoding. I've noticed there's something called a code page, and there are some conversion functions and UTF-8 stream readers, but because of my weak C++ knowledge I don't really know the solution.

=== UPDATE ===

I've found a property in the Encoding class... When I read the output string like this:

Encoding.UTF8.GetString(Encoding.Default.GetBytes(e))

...the result is right. I'm just getting the bytes from the string via the "Default" encoding and then reading those bytes again as UTF-8. The Default encoding on my computer is ISO-8859-2, but it would be just plain stupid to rely on this property.

So... the question remains. I still need to know how to read the void* data with a particular encoding. But at least I now know that the string is being returned in the default encoding used by Windows.
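For the record, here is a minimal sketch of why that round trip happens to work here but is fragile (assuming the DLL really hands back raw UTF-8 bytes that the marshaller decoded with the system ANSI code page):

    // Sketch: the Encoding.Default round trip only works when the ANSI code
    // page is single-byte and maps every byte value, as ISO-8859-2 does.
    // A multi-byte ANSI code page (e.g. Shift-JIS) would corrupt the bytes.
    using System;
    using System.Linq;
    using System.Text;

    class RoundTripCheck
    {
        static void Main()
        {
            byte[] utf8 = Encoding.UTF8.GetBytes("Организация");

            Encoding latin2 = Encoding.GetEncoding("iso-8859-2");
            byte[] roundTripped = latin2.GetBytes(latin2.GetString(utf8));

            // True here, but only because ISO-8859-2 round-trips every byte.
            Console.WriteLine(utf8.SequenceEqual(roundTripped));
        }
    }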

=== ANSWER ===

Thanks everyone for answers.

As James pointed out, char* data is just numbers. So I avoided all encoding troubles by just getting the numbers and not trying to interpret them as a string at all. There was another problem, though: how to get an array of bytes in C# out of a char* returned by the C++ library? There is a Marshal.Copy method, but I would need to know the size of the string. Every C string ends with a null character '\0', so in the end I just read byte after byte until I find this null character. The code then looks like this.

IntPtr charPointer = ExternDll.FindString(fileIntPtr, "someString");
List<byte> bytes = new List<byte>();
for (int i = 0; ; i++)
{
    byte b = Marshal.ReadByte(charPointer, i);
    if (b == 0)    // stop at the terminating '\0'
        break;

    bytes.Add(b);
}

string theResultStringInTheUTF8 = Encoding.UTF8.GetString(bytes.ToArray());
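The same idea wrapped in a reusable helper (my own sketch, not part of the library; it assumes the native side returns a NUL-terminated UTF-8 string). Newer .NET versions (not the classic .NET Framework) also offer Marshal.PtrToStringUTF8, which does this in one call:

    // Sketch of a reusable helper (assumes a NUL-terminated UTF-8 string).
    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class NativeStrings
    {
        public static string ReadUtf8(IntPtr ptr)
        {
            if (ptr == IntPtr.Zero)
                return null;

            // Find the terminating '\0'.
            int length = 0;
            while (Marshal.ReadByte(ptr, length) != 0)
                length++;

            // Copy the raw bytes out and decode them as UTF-8.
            byte[] buffer = new byte[length];
            Marshal.Copy(ptr, buffer, 0, length);
            return Encoding.UTF8.GetString(buffer);
        }
    }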
Mirek

2 Answers


C++ is agnostic about character encoding. For that matter, if you're getting the characters through some sort of hacky type conversions, any language will be; there's no way for the language to know what the encoding is.

In C++, char is really just a small integer; it's only by convention that it contains some character encoding. But which encoding depends on you. If your input is really UTF-8, then the chars pointed to by a char* will contain UTF-8; if it's something else, then they'll contain something else.

When you output the chars to the screen, C++ just passes them on (at least by default). It's up to the terminal window to decide how to interpret them; i.e. break the sequence into code points, then map each code point to a graphic image. Under Unix (xterm), this is defined by the display font; under Windows, formally at least, by the code page (but you can certainly install miscoded fonts which will screw it up). C++ has nothing to do with this. The code page for UTF-8 is 65001; if you set the terminal to use this code page (chcp 65001 on the command line), then output UTF-8, it should work.

James Kanze

.NET can automatically marshal only OEM/ANSI and Unicode/UTF-16 strings. It can't do it for UTF-8, so that's where you went wrong.

You have to manually convert strings from/to UTF-8 with System.Text.Encoding.UTF8:

string decodedString = Encoding.UTF8.GetString(encodedBytes);

and pass them to C++ as binary data. Don't forget to append a terminating '\0'.
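For example, a minimal sketch against the question's DLL (my assumptions: the exports are redeclared to take byte[] for the strings and return IntPtr; ToUtf8z is a made-up helper):

    // Sketch only: strings cross the native boundary as raw UTF-8 bytes.
    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class ExternDll
    {
        [DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr OpenFile(byte[] path);

        [DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr FindString(IntPtr filePointer, byte[] key);

        // Encode a managed string as NUL-terminated UTF-8 for the native call.
        public static byte[] ToUtf8z(string s)
        {
            byte[] bytes = new byte[Encoding.UTF8.GetByteCount(s) + 1];
            Encoding.UTF8.GetBytes(s, 0, s.Length, bytes, 0);
            return bytes;   // last element is already 0, i.e. the '\0'
        }
    }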

blaze