
I'm trying to create a (C++) helper function that transforms a lookup table that's computed in my C++ code into a Halide Func which takes a float as an argument and lerps between samples in the LUT.

The use-case here is that the user has generated a tone curve using a spline with a bunch of control points that we want to apply with Halide. So I sample this spline at a bunch of points, and I want to write a Halide function that lets me linearly interpolate between these samples. Here's my current attempt at this:

#include <Halide.h>
#include <iostream>
#include <vector>

using namespace Halide;
using namespace std;

static Func LutFunc(const vector<float> &lut) {
    Func result;
    Var val;

    // Copy the LUT into a Halide Buffer.
    int lutSize = (int) lut.size();
    Buffer<float> lutbuf(lutSize);
    for(int i = 0; i < lutSize; i++) {
        lutbuf(i) = lut[i];
    }

    // Compute the offset into the LUT along with the blending factor
    // for the 2 samples we'll take.
    auto y = val * ((float) lutSize - 1);
    auto index = clamp(cast<int>(y), 0, lutSize - 2);
    auto fract = y - cast<float>(index);

    // interpolate between the 2 nearest samples
    result(val) = (lutbuf(index) * (1.0f - fract)) + (lutbuf(index + 1) * fract);

    return result;
}

The problem is that if I then try to include this function into my Halide pipeline, I get this error:

Implicit cast from float32 to int in argument 1 in call to "f" is not allowed. Use an explicit cast.

How can I explain to Halide that the argument to this function should be a float, rather than an int?

Here's a short test program for the above, in case it's helpful:

int main(int argc, char *argv[]) {
    vector<float> lut = { 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 };
    auto f = LutFunc(lut);

    Buffer<float> testData(4);
    testData(0) = 0.05;
    testData(1) = 0.15;
    testData(2) = 0.25;
    testData(3) = 0.35;

    Func testFn;
    Var x;

    testFn(x) = f(testData(x));

    Buffer<float> resultBuf(4);
    testFn.realize(resultBuf);

    for(int i = 0; i < 4; i++) {
        cout << i << " = " << resultBuf(i) << endl;
    }

    return 0;
}

(If there's an easier way to generate these lerping LUT functions, especially one that can take advantage of the sampler hardware on GPUs, I'd be interested to know about that too.)

j4m3z0r

2 Answers


As you already found, the arguments to Halide::Func are always integer types (int32 by default); this is intrinsic to Halide.
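For illustration, here's a minimal sketch of what is and isn't accepted (the names here are hypothetical, not from your code, and it assumes #include <Halide.h> and using namespace Halide):

Func g;
Var i;                          // Vars are 32-bit integer coordinates
g(i) = cast<float>(i) * 0.5f;   // defining g over an integer domain is fine

// g(0.5f) would produce the same "implicit cast from float32 to int" error;
// the call site needs an integer Expr, e.g. via an explicit cast:
Expr sample = g(cast<int>(0.5f * 10.0f));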

Re: a better way to do lerping, Halide has a built-in lerp() helper: see http://halide-lang.org/docs/namespace_halide.html#a55158f5f229510194c425dfae256d530
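For instance, the manual (1 - fract) / fract blend from the question could be written with lerp() like this (a sketch only, where val is assumed to be a float Expr in [0, 1], and lutSize/lutbuf are the LUT length and Buffer from the question, indexed directly rather than through a Func):

// Same clamped index / fraction computation as in the question, but the
// blend is done by Halide's built-in lerp(zero_val, one_val, weight).
Expr y     = val * ((float) lutSize - 1);
Expr index = clamp(cast<int>(y), 0, lutSize - 2);
Expr fract = y - cast<float>(index);
Expr out   = lerp(lutbuf(index), lutbuf(index + 1), fract);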

Steven Johnson

OK, so it seems it's not possible to do this with a Func: Funcs are intended to encapsulate a transformation over the source data, and their arguments are constrained to be integers because they represent the pixel coordinates of the input.

I think the way to achieve this is to break it into two parts: a function that copies my LUT into a Halide Buffer, and a second function that takes that buffer and an Expr as arguments and returns an Expr implementing the interpolation inline.

Here's a revised version of my function that does what I want:

#include <Halide.h>
#include <iostream>
#include <vector>

using namespace Halide;
using namespace std;

static Expr LutInterp(const Buffer<float> &lutbuf, Expr val) {
    // Interpolate the two values nearest the specified input.
    int lutSize = lutbuf.dim(0).extent();
    auto y = val * ((float) lutSize - 1);
    auto index = clamp(cast<int>(y), 0, lutSize - 2);
    auto fract = y - cast<float>(index);
    return (lutbuf(index) * (1.0f - fract)) + (lutbuf(index + 1) * fract);
}

static Buffer<float> LutToBuff(const vector<float> &lut) {
    int lutSize = (int) lut.size();
    Buffer<float> lutbuf(lutSize);
    for(int i = 0; i < lutSize; i++) {
        lutbuf(i) = lut[i];
    }
    return lutbuf;
}


int main(int argc, char *argv[]) {
    vector<float> lut = { 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 };
    auto lutbuf = LutToBuff(lut);

    Buffer<float> testData(4);
    testData(0) = 0.05;
    testData(1) = 0.15;
    testData(2) = 0.25;
    testData(3) = 0.35;

    Func testFn;
    Var x;

    testFn(x) = LutInterp(lutbuf, testData(x));

    Buffer<float> resultBuf(4);
    testFn.realize(resultBuf);

    for(int i = 0; i < 4; i++) {
        cout << i << " = " << resultBuf(i) << endl;
    }

    return 0;
}

I'll proceed with this for the time being, and swap it out if I happen upon a better solution in the future.

j4m3z0r