
I'm doing live video processing on an ODROID XU4 (Samsung Exynos5422 with Mali-T628 GPU) using OpenCV 4.1 and Python 3.6. I'm able to use the GPU by converting the NumPy arrays containing my images to UMat, e.g.:

img_umat = cv2.UMat(img_array)

With that done, the image processing code runs faster than it does on the CPU; however, the transfer to/from the GPU takes a long time (~0.03 seconds in some cases). Is there any way around this?

I am new to GPU programming and have been scratching my head over section 8.3 here. I don't know how the default `cv2.UMat(array)` initializer allocates memory, so I've tried to specify it, e.g.

host_mat = cv2.UMat(img_array, cv2.USAGE_ALLOCATE_HOST_MEMORY)

But when I do this, no error is thrown, yet host_mat is empty. Am I doing something wrong, or am I completely on the wrong path? Any suggestions appreciated.

Anna Svagzdys
  • It says it all in 8.3.1: _To use the buffer on the application processor side, you must map this buffer and write the data into it._ So it looks like you allocated the buffer but didn't copy the data into it, hence host_mat is empty. – doqtor Jul 10 '20 at 06:01
  • @doqtor from the docs here I can see how to allocate the memory in python, but how do I copy my data to it? https://docs.opencv.org/4.1.0/d7/d45/classcv_1_1UMat.html – Anna Svagzdys Jul 13 '20 at 14:51
  • ...If I try something like `host_mat[:] = img_array[:]` I get an error: 'cv2.UMat' object does not support item assignment – Anna Svagzdys Jul 13 '20 at 15:08
  • This is OpenCV-specific rather than Python-specific, and I'm not familiar with OpenCV at all. You need to check that in the OpenCV docs. In C++, `clEnqueueMapBuffer` is used to map this newly allocated memory and `memcpy` to copy data into it. – doqtor Jul 13 '20 at 15:09

0 Answers