I am running this snippet in a Python interpreter:
>>> from ctypes import *
>>> libc = CDLL("libc.so.6")
>>> libc.printf("%f %f\n", c_float(1.0), c_double(1.0))
0.000000 1.000000
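For the record, passing both arguments as c_double prints the values I expect. This is a minimal sketch of that check; it assumes Linux/glibc (hence "libc.so.6"), and note that under Python 3 the format string has to be bytes:

```python
from ctypes import CDLL, c_double

libc = CDLL("libc.so.6")

# Both arguments wrapped as c_double: printf receives two
# 8-byte doubles, which is what %f expects in a variadic call.
n = libc.printf(b"%f %f\n", c_double(1.0), c_double(1.0))
# prints: 1.000000 1.000000
```

(printf returns the number of characters written, which ctypes hands back as a Python int.)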
I know printf expects a double for %f, BUT I thought floats were promoted to doubles when passed to a variadic function, as the following C code shows:
#include <stdio.h>

int main(void)
{
    float a = 1.0f;
    double b = 1.0;
    printf("%f %f\n", a, b);
    return 0;
}
which produces the expected 1.000000 1.000000.
Am I missing something? Is the compiler doing an implicit conversion in the C code?
I am using a 64-bit machine.