
Linear interpolation between two values is rather simple:

import numpy as np

def lerp(v, d):
    # Weighted average of the two endpoints: v[0] at d=0, v[1] at d=1.
    return v[0] * (1 - d) + v[1] * d

print(lerp(np.array([3, 5]), 0.75))
> 4.5

Let's generalize it to arbitrary tensors of shape (2, 2, …), i.e.:

def lerp(v, d):
    # One interpolation weight per axis of v.
    assert len(v.shape) >= 1 and d.shape == (len(v.shape),)
    if len(v.shape) == 1:
        # Base case: a pair of scalars, interpolated with weight d[0].
        assert v.shape[0] == 2
        dd = np.array([1 - d[0], d[0]], dtype=v.dtype)
        return sum(v * dd)
    else:
        # Recurse: interpolate each submatrix along the remaining axes,
        # then interpolate the two results along the first axis.
        v = [lerp(submatrix, d[1:]) for submatrix in v]
        return lerp(np.array(v), d[:1])

assert lerp(np.array([3.0, 4.0]), np.array([0.75])) == 3.75
assert lerp(
    np.array(range(8), dtype='float64').reshape((2,2,2)),
    np.array([0.25, 0.5, 0.75])
) == 2.75
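For reference, the recursion above is equivalent to a weighted sum over all 2**n corner values, where each corner's weight is a product of d[k] or (1 - d[k]) factors per axis. A minimal sketch of that closed form (the helper name multilerp_corners is made up here):

```python
import itertools
import numpy as np

def multilerp_corners(v, d):
    # Multilinear interpolation as an explicit sum over the 2**n corners:
    # corner (i_1, ..., i_n) contributes v[corner] times the product of
    # d[k] if i_k == 1 else (1 - d[k]).
    n = len(d)
    total = 0.0
    for corner in itertools.product((0, 1), repeat=n):
        weight = 1.0
        for k, i in enumerate(corner):
            weight *= d[k] if i else 1 - d[k]
        total += weight * v[corner]
    return total
```

This reproduces the two asserted values above, but it still loops in Python over the corners.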

It works when every value is a scalar, but fails when the individual values are tensors, i.e. when the shapes do not match the assertion above. For instance:

assert all(lerp(
    np.array([[1.0, 2.0], [3.0, 4.0]]),
    np.array([0.75])
) == np.array([ 2.5,  3.5]))
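To spell out what is expected here: interpolating between the two row vectors at d = 0.75 should act elementwise, like so:

```python
import numpy as np

# Elementwise interpolation between the rows [1, 2] and [3, 4] at d = 0.75.
expected = np.array([1.0, 2.0]) * (1 - 0.75) + np.array([3.0, 4.0]) * 0.75
# expected is [2.5, 3.5]
```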

How can this be implemented in pure NumPy, without Python recursion or manual index juggling, so that it also works with tensor values? Is there a NumPy function for that?
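One possibility, sketched here under the assumption that the first len(d) axes of v are the interpolation axes (the name lerp_nd is made up): build the (2, …, 2) weight tensor from outer products of the per-axis weight pairs, then contract it against v with np.tensordot. Any trailing "value" axes of v pass through the contraction untouched.

```python
from functools import reduce
import numpy as np

def lerp_nd(v, d):
    # Weight tensor of shape (2,) * len(d): the outer product of the
    # per-axis weight pairs [1 - d[k], d[k]].
    weights = reduce(np.multiply.outer, [np.array([1 - w, w]) for w in d])
    # Sum weights against the first len(d) axes of v; remaining axes of v
    # (tensor-valued entries) survive unchanged.
    return np.tensordot(weights, v, axes=len(d))
```

This handles both the scalar-valued cases and the (2, 2)-with-one-weight case from the question, without recursion.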

jaboja
