
I am writing Halide code and I have declared a Buffer<double> input as the input to my Halide function. However, I am not sure whether that makes sense, since Halide tutorial #1 says:

// Halide does type inference for you. Var objects represent
// 32-bit integers, so the Expr object 'x + y' also represents a
// 32-bit integer, and so 'gradient' defines a 32-bit image, and
// so we got a 32-bit signed integer image out when we call
// 'realize'. Halide types and type-casting rules are equivalent
// to C.

I can run the function without any problems, but I am not sure whether some type cast is silently converting my doubles to float without my knowing.

1 Answer

Good question! (We need more and better documentation.)

It is perfectly reasonable to use doubles. Much like in C (with the C-style type promotion rules mentioned in the comment you quoted), double <op> float or double <op> int performs the computation in double and returns a double.

If, for example, you have your Buffer<double> input, then:

Func f;
Var x, y;
f(x, y) = input(x, y) * 2;

will infer the type of f as double as well. You'll notice this if you check the type of the buffer you get back as the result. As in C, the int constant 2 will be promoted to double before the multiplication, and the result will be stored as double. The type of each Func is simply the inferred type of the right-hand-side Expr that first defines it.
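
For instance, here is a minimal sketch of checking that (the buffer size and contents are arbitrary, and it assumes a recent Halide where realize takes a list of sizes):

#include "Halide.h"
#include <cassert>
using namespace Halide;

int main() {
    Buffer<double> input(16, 16);  // arbitrary size, uninitialized contents
    Func f;
    Var x, y;
    f(x, y) = input(x, y) * 2;

    // f's inferred type is Float(64), so realizing into a Buffer<double>
    // succeeds; asking for a Buffer<float> here would trip Halide's
    // runtime type check instead.
    Buffer<double> out = f.realize({16, 16});
    assert(out.type() == Float(64));
    return 0;
}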

Types are promoted automatically, never demoted. If you want to constrain the type of the result, you can use explicit casts in your expressions.
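
For example, continuing the sketch above, an explicit cast pins the result down to float even though the input is double (the cast target here is just an illustration):

// Force the computation down to float despite the double input.
Func g;
g(x, y) = cast<float>(input(x, y)) * 2.0f;  // computed and stored as float

// g now realizes as a Buffer<float>, not a Buffer<double>.
Buffer<float> narrow = g.realize({16, 16});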

Does that make sense?

jrk