
I am currently using scipy's `minimize` for my optimization problem, but the compute time is significant. I came across numba, which could reduce the compute time, but when I try to use it on my objective function, it throws the following error:

TypingError: Failed in nopython mode pipeline (step: ensure IR is legal prior to lowering) The use of a reflected list(int64)<iv=None> type, assigned to variable 'wInt' in globals, is not supported as globals are considered compile-time constants and there is no known way to compile a reflected list(int64)<iv=None> type as a constant.

Here is a sample of the code I am currently using for my objective function:

# x is a list returned by a function and is computed only once at
# the beginning of the code execution.
x = someFunc()

from numba import jit, float64, int64

@jit(float64(int64), nopython=True, parallel=True)
def fast_rosenbrock(N):
    out = 0.0
    for i in range(N-1):
        out += 100.0 * (x[i+1] - x[i]**2)**2 / (1 - x[i])**2
    return out

The objective function utilizes a global variable that is obtained by calling a function. I am worried that if I make it local, the corresponding values will be recalculated on every call, which I would like to avoid since the function is very big and only needs to run once. How do I resolve this?
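Numba cannot compile a plain Python list held in a global, since globals are treated as compile-time constants and a reflected list cannot be frozen into one. One way to keep the one-time computation while satisfying numba is to compute `x` once, convert it to a NumPy array, and pass it in as an argument. A minimal sketch without the jit decorator (`someFunc` is stood in by a stub here, since its body isn't shown):

```python
import numpy as np

def someFunc():
    # Stand-in for the real one-time computation, which returns a list.
    return [0.5, 0.5, 0.5]

# Computed once, converted to a typed NumPy array that numba can accept.
x = np.asarray(someFunc(), dtype=np.float64)

def fast_rosenbrock(x, N):
    # Same objective as before, but x is now a parameter, not a global.
    out = 0.0
    for i in range(N - 1):
        out += 100.0 * (x[i + 1] - x[i] ** 2) ** 2 / (1 - x[i]) ** 2
    return out

print(fast_rosenbrock(x, len(x)))  # → 50.0 for the stub values above
```

Passing the array does not re-run `someFunc`; only its cached result crosses the call boundary on each objective evaluation.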

Edit 1:

I tried passing x as an argument. It works without numba, but when I make it a jitted function, it throws an error.

Without numba, I am getting the desired results:

def fast_rosenbrock(x, N):
    out = 0.0
    for i in range(N-1):
        out += 100.0 * (x[i+1] - x[i]**2)**2 / (1 - x[i])**2
    return out

With numba:

from numba import jit, float64, int64

@jit(float64(float64[:], int64), nopython=True, parallel=True)
def fast_rosenbrock(x, N):
    out = 0.0
    for i in range(N-1):
        out += 100.0 * (x[i+1] - x[i]**2)**2 / (1 - x[i])**2
    return out

This throws an error: `ZeroDivisionError: division by zero`

Am I doing anything wrong here?

  • Why don't you pass `x` as an argument? – Nils Werner Oct 28 '20 at 09:14
  • Passing `x` to `fast_rosenbrock` would not lead to calling `someFunc`, since `x` is just the return value of that function. Just pass `x` as an argument to `fast_rosenbrock` :) – Niko Föhr Oct 28 '20 at 09:28
  • Thanks @NilsWerner and @np8, I tried passing it as an argument as suggested. Without numba, my optimizer produces the desired results, but when I make it a jitted function, I get a `ZeroDivisionError: division by zero`. Any idea what could cause that? – Prajwal Ainapur Oct 28 '20 at 09:45

1 Answer


Resolved the error. Numba does in fact support the `/` operator, but in nopython mode it follows Python semantics and raises `ZeroDivisionError` when the denominator is zero (here, whenever some `x[i] == 1`). `np.divide` follows NumPy semantics instead, returning `inf` rather than raising, so switching to it makes the error go away. The following is the updated code:

import numpy as np
from numba import jit, float64, int64

@jit(float64(float64[:], int64), nopython=True, parallel=True)
def rosenbrock(x, N):
    out = 0.0
    for i in range(N-1):
        out += np.divide(100.0 * (x[i+1] - x[i]**2)**2, (1 - x[i])**2)
    return out
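The difference in behavior is visible even without numba, since the jitted `/` mirrors plain Python and `np.divide` mirrors NumPy. A quick check (a sketch illustrating the two semantics, not the jitted code itself):

```python
import numpy as np

# Python semantics: scalar division by zero raises.
try:
    1.0 / 0.0
    raised = False
except ZeroDivisionError:
    raised = True
print(raised)  # → True

# NumPy semantics: division by zero yields inf (normally with a
# RuntimeWarning, suppressed here via errstate).
with np.errstate(divide="ignore"):
    result = np.divide(1.0, 0.0)
print(result)  # → inf
```

Note that `np.divide` does not remove the underlying issue: terms where `x[i] == 1` now contribute `inf` to the sum instead of raising, which may or may not be acceptable for the optimizer.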

Results:

Without Numba: 78.4 ms ± 1.23 ms per loop

With Numba: 6.59 ms ± 152 µs per loop

This is roughly a 12x improvement in compute time.
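For completeness, the objective plugs into `scipy.optimize.minimize` by passing `N` through the `args` parameter. A sketch using the plain-Python version of the objective (the `@jit`-decorated one is called the same way; the starting point and method here are illustrative choices, not from the original post):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x, N):
    # Same objective as in the answer, without the jit decorator.
    out = 0.0
    for i in range(N - 1):
        with np.errstate(divide="ignore"):
            out += np.divide(100.0 * (x[i + 1] - x[i] ** 2) ** 2,
                             (1 - x[i]) ** 2)
    return out

N = 3
x0 = np.full(N, 0.5)  # arbitrary starting point for the optimizer
res = minimize(rosenbrock, x0, args=(N,), method="Nelder-Mead")
print(res.fun <= rosenbrock(x0, N))  # → True: no worse than the start
```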