
I am following the ONNX inference tutorial at: https://github.com/onnx/models/blob/main/vision/classification/onnxrt_inference.ipynb.

Instead of doing the pre-processing in pure NumPy, I have rewritten the function in CuPy for GPU acceleration.

The pre-processing function thus looks like:


import cupy as cp

def preprocess_gpu(cuImage):
    # Scale to [0, 1]; keep float32, which the model expects
    img = (cuImage / 255.).astype(cp.float32)
    h, w = img.shape[0], img.shape[1]
    # Center-crop to 224x224
    y0 = (h - 224) // 2
    x0 = (w - 224) // 2
    img = img[y0 : y0+224, x0 : x0+224, :]
    # Normalize with the ImageNet mean/std
    img = (img - cp.array([0.485, 0.456, 0.406], dtype=cp.float32)) / cp.array([0.229, 0.224, 0.225], dtype=cp.float32)
    # HWC -> CHW, then add a batch dimension -> NCHW
    img = cp.transpose(img, axes=[2, 0, 1])
    img = cp.expand_dims(img, axis=0)
    return img
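For comparison, here is the same pipeline in plain NumPy, the way the tutorial does it (a minimal sketch; the constants are the standard ImageNet mean/std used above):

```python
import numpy as np

def preprocess_cpu(image):
    """NumPy equivalent of preprocess_gpu: scale to [0, 1], center-crop
    to 224x224, normalize with ImageNet mean/std, reorder to NCHW."""
    img = image.astype(np.float32) / 255.0
    h, w = img.shape[:2]
    y0, x0 = (h - 224) // 2, (w - 224) // 2
    img = img[y0:y0 + 224, x0:x0 + 224, :]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std
    img = np.transpose(img, (2, 0, 1))   # HWC -> CHW
    return np.expand_dims(img, axis=0)   # add batch dim -> NCHW
```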

When I feed the resulting array into the prediction function,

def predict(path):
    img = get_cuimage(path)
    img = preprocess_gpu(img)
    ort_inputs = {session.get_inputs()[0].name: img}
    preds = session.run(None, ort_inputs)[0]
    preds = np.squeeze(preds)
    # Class indices sorted by descending score
    a = np.argsort(preds)[::-1]
    print('class=%s ; probability=%f' % (labels[a[0]], preds[a[0]]))

predict(path)

I get the error:

RuntimeError: Input must be a list of dictionaries or a single numpy array for input 'data'. 

Are there any workarounds? I know that ONNX Runtime is currently using the CPU, but that should not be a problem. Furthermore, I cannot seem to find onnxruntime-gpu on Conda anywhere.
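For context, the workaround I have considered is copying the array back to host memory at the session boundary, since the CPU execution provider only accepts NumPy arrays while CuPy arrays live on the GPU. A minimal sketch (the `to_numpy` helper is my own name, not part of either library; CuPy's `.get()` performs the device-to-host copy):

```python
import numpy as np

def to_numpy(arr):
    """Return a host-side NumPy array. CuPy arrays expose .get(), which
    copies device memory to the host; NumPy arrays pass through unchanged."""
    return arr.get() if hasattr(arr, "get") else np.asarray(arr)

# In predict(), convert just before building the feed dict:
# ort_inputs = {session.get_inputs()[0].name: to_numpy(img)}
```

This of course pays for a device-to-host transfer on every call, which partly defeats the point of doing the pre-processing on the GPU.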

Any tips greatly appreciated.
