I am writing Halide code and I have declared a Buffer<double> as the input to my Halide function. However, I am not sure whether that makes any sense, since Halide tutorial #1 says:
// Halide does type inference for you. Var objects represent
// 32-bit integers, so the Expr object 'x + y' also represents a
// 32-bit integer, and so 'gradient' defines a 32-bit image, and
// so we got a 32-bit signed integer image out when we call
// 'realize'. Halide types and type-casting rules are equivalent
// to C.
I can run the function without any problems, but I am not sure whether some implicit typecasting is converting my doubles to float without me even knowing.
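For context, here is a minimal sketch of the kind of pipeline I mean (the buffer size and the squaring operation are just placeholders for my real code):

```cpp
#include "Halide.h"
#include <iostream>

using namespace Halide;

int main() {
    // Hypothetical stand-in for my real input: a small double-precision buffer.
    Buffer<double> input(4, 4);
    input.fill(1.5);

    Var x("x"), y("y");
    Func f("f");
    // input(x, y) is an Expr of type Float(64); squaring it keeps Float(64).
    f(x, y) = input(x, y) * input(x, y);

    // Print the type of the pure definition to see whether anything was cast.
    std::cout << f.value().type() << "\n";

    // Realize into a double buffer (older Halide versions use f.realize(4, 4)).
    Buffer<double> out = f.realize({4, 4});
    std::cout << out(0, 0) << "\n";
    return 0;
}
```

My question is whether a pipeline like this stays in double precision all the way through, or whether Halide silently narrows it to 32-bit float somewhere.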