I am a little bit intrigued by the way some GLib functions such as "g_ascii_dtostr" (and the GKeyFile functions using doubles) work.

Consider these lines:

gchar buf[30];
g_message("Double: %f, as String: %s", 0.2, g_ascii_dtostr(buf, 30, 0.2));

Which outputs

Double: 0.200000, as String: 0.20000000000000001

(The weird conversion only happens when I set the buffer size high enough though)

Similar things happen when I store the double "1.9" in a GKeyFile, for example: in the resulting file it is saved as "1.8999999999999999". Apparently the conversion back through "g_ascii_strtod" is supposed to be lossless, but it still bothers me why this weirdness happens in the first place. It also makes my config key-value files pretty ugly.
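
For illustration, here is a minimal sketch of what I am doing (the group and key names are only placeholders, not my real config):

#include <glib.h>

int main(void)
{
    GKeyFile *kf = g_key_file_new();

    /* Store 1.9 under a placeholder group/key. */
    g_key_file_set_double(kf, "settings", "threshold", 1.9);

    /* The serialized data contains threshold=1.8999999999999999,
       yet reading the key back yields exactly the same double. */
    gchar *data = g_key_file_to_data(kf, NULL, NULL);
    g_print("%s", data);
    g_print("read back: %.17g\n",
            g_key_file_get_double(kf, "settings", "threshold", NULL));

    g_free(data);
    g_key_file_free(kf);
    return 0;
}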

I think I have read somewhere that an intermediate "long double" type is used, but that still wouldn't explain why the converted value is "dirty", because a conversion from int to double, for example, doesn't seem to have any similar effect.
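
To illustrate that comparison (a small standalone test, not part of my actual code): an int converts to a double exactly, while 0.2 simply has no exact binary representation:

#include <glib.h>

int main(void)
{
    gchar buf[G_ASCII_DTOSTR_BUF_SIZE];

    /* 7 is exactly representable as a double, so the string stays clean. */
    g_print("%s\n", g_ascii_dtostr(buf, G_ASCII_DTOSTR_BUF_SIZE, (gdouble) 7));

    /* 0.2 is not; printed with full precision you see the stored approximation. */
    g_print("%s\n", g_ascii_dtostr(buf, G_ASCII_DTOSTR_BUF_SIZE, 0.2));

    return 0;
}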

Ancurio
  • http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html – Mysticial Dec 31 '11 at 09:05
  • A pointer to the most frequent floating-point-related answer on stackoverflow (or anywhere else?): http://stackoverflow.com/a/6033242/12711 – Michael Burr Dec 31 '11 at 09:08
  • @Mysticial Thank you, I will read that paper later, but for now, basically, when I store the value "1.9" it could never have been stored precisely in the first place because in binary, it has infinite digits. Is that correct? – Ancurio Dec 31 '11 at 09:32
  • If you change the `%f` to `%.18f` you'll see a similar output from the `0.2` literal too. – caf Dec 31 '11 at 10:23
  • http://player.vimeo.com/video/7516539#t=7m26s is short and fun. – Yasushi Shoji Aug 19 '12 at 04:05

1 Answer


From the documentation:

"This functions generates enough precision that converting the string back using g_ascii_strtod() gives the same machine-number (on machines with IEEE compatible 64bit doubles)."

As caf noted in his comment, the %f specifier rounds to six digits after the decimal point, so the string produced by g_ascii_dtostr() is actually the more faithful picture of the double that is stored. If you want the shorter, truncated form, you can use g_ascii_formatd() and pass it a "%f"-style format.
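
A quick sketch of both, using G_ASCII_DTOSTR_BUF_SIZE for the buffer and a plain "%f" format (both just illustrative choices):

#include <glib.h>

int main(void)
{
    gchar buf[G_ASCII_DTOSTR_BUF_SIZE];

    /* Full precision: enough digits that the exact double survives a round trip. */
    g_ascii_dtostr(buf, G_ASCII_DTOSTR_BUF_SIZE, 0.2);
    g_print("dtostr:   %s\n", buf);                               /* 0.20000000000000001 */
    g_print("lossless: %d\n", g_ascii_strtod(buf, NULL) == 0.2);  /* 1 */

    /* Shorter %f-style output, at the cost of that exact round trip. */
    g_ascii_formatd(buf, G_ASCII_DTOSTR_BUF_SIZE, "%f", 0.2);
    g_print("formatd:  %s\n", buf);                               /* 0.200000 */

    return 0;
}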

Michael C.