I have written a C++ function that efficiently loops over the pixels of a grayscale image. The key was using a pointer to each image row instead of regular per-pixel access.
The C++ logic looks like this:
Mat rawDepth = imread(image_path, IMREAD_UNCHANGED); // read the 16-bit grayscale image as-is
for (int j = 0; j < rawDepth.rows; j++)
{
    const ushort* Mi = rawDepth.ptr<ushort>(j); // pointer to the start of the current row
    for (int i = 0; i < rawDepth.cols; i++)
    {
        ushort pixelValue = Mi[i]; // value of the pixel at (j, i)
    }
}
This method is extremely fast, but I need it running in Python. I was able to rewrite the loop in Cython, but now I am stuck on getting pointers to the rows of a NumPy ndarray.
My image is stored in a 2D NumPy array (I read it with cv2).
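For reference, the read step looks roughly like this (the file name is just a placeholder, and I am assuming IMREAD_UNCHANGED so the 16-bit depth is preserved):

import cv2

# IMREAD_UNCHANGED keeps the original 16-bit values instead of converting to 8-bit BGR
raw_depth = cv2.imread("depth_image.png", cv2.IMREAD_UNCHANGED)  # 2D uint16 ndarray
print(raw_depth.dtype, raw_depth.shape)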
I have been trying to convert the image from a NumPy array into something similar to the C++ Mat object that gives me the same efficiency.
I have tried several approaches that I found online, but none of them seem to work. I am using Python 3.6.6 and Cython 0.28.5.
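For illustration, this is the kind of row-pointer loop I am trying to reproduce on the Cython side (a simplified sketch using a typed memoryview, not my actual code; names are placeholders and I am not sure this is the best route):

# scan.pyx -- simplified sketch, not my actual code
# cython: boundscheck=False, wraparound=False
cimport numpy as cnp

def scan_rows(cnp.uint16_t[:, ::1] raw_depth):
    # loop over a C-contiguous 2D uint16 image via a pointer to each row
    cdef Py_ssize_t j, i
    cdef cnp.uint16_t* Mi
    cdef cnp.uint16_t pixel_value
    for j in range(raw_depth.shape[0]):
        Mi = &raw_depth[j, 0]        # pointer to the start of row j
        for i in range(raw_depth.shape[1]):
            pixel_value = Mi[i]      # value of the pixel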
Thanks
Edit: I was able to implement the solution described here. I now have a .cpp file with an nparrayToMat() function that I can call from my .pyx file.
However, I can't seem to access the .ptr() method of Mat from Cython.
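For context, this is roughly how I imagine the declarations would have to look on the Cython side (just a sketch; the header name, the nparrayToMat signature, and the cname trick for the templated ptr<T>() are guesses on my part):

# mat_decl.pxd -- a sketch only; names and signatures below are guesses
cdef extern from "opencv2/core/core.hpp" namespace "cv":
    cdef cppclass Mat:
        Mat() except +
        int rows
        int cols
        # my guess at exposing one concrete instantiation of the templated
        # ptr<T>() method -- this is the part I am stuck on
        unsigned short* ptrUshort "ptr<unsigned short>"(int row)

cdef extern from "conversion.h":
    # converter from the cpp file mentioned above (signature assumed)
    Mat nparrayToMat(object array)

In the .pyx I would then expect to write something like Mi = m.ptrUshort(j) inside the row loop, but I am not sure this is the right way to expose a templated member function.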
I would appreciate it if someone could point out how to do this.