I have an image imported into a NumPy ndarray with shape (1027, 888, 3).
My task is to create a method that returns two one-dimensional arrays of indices that select a tile from the image:
ii, jj = tile_coordinates(i, j, tile_size)
imshow(image[ii, jj])
I would like it to produce the same result as this code:
imshow(image[1:32, 2:32])
I tried to do this:
def tile_coordinates(i, j, tile_size):
    return range(i, i + tile_size), range(j, j + tile_size)

ii, jj = tile_coordinates(1, 2, 32)
imshow(image[ii, jj])
But the image is not right. In fact, the result returned from indexing the image with the two arrays has shape (32, 3), while
image[1:32, 2:32].shape
returns (31, 30, 3).
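For reference, here is a minimal reproduction of the shape difference I'm seeing (a dummy zeros array of the same shape stands in for the actual image, since the pixel values don't matter for the shapes):

import numpy as np

image = np.zeros((1027, 888, 3))   # dummy stand-in for the real image
ii, jj = range(1, 1 + 32), range(2, 2 + 32)

# the two index sequences get paired element-wise: (1, 2), (2, 3), ..., (32, 33),
# so only 32 individual pixels are selected, each with its 3 channels
print(image[ii, jj].shape)       # (32, 3)

# the slice takes every combination of rows 1..31 and columns 2..31
print(image[1:32, 2:32].shape)   # (31, 30, 3)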
So how should the arrays returned from the tile_coordinates method be formed to reproduce the same result as the slicing example? Is it even possible?
PS: The specification comes from a homework assignment. I have already spent a few hours looking at the documentation and other examples of indexing, but I haven't found anything that does what is required, so I'm quite stuck. Any guidance would be really appreciated :)
Thanks!